Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN      ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250811 02:02:16.467199 12468 test_util.cc:276] Using random seed: 1343961587
W20250811 02:02:17.728894 12468 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.209s user 0.423s sys 0.784s
W20250811 02:02:17.729458 12468 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.210s user 0.423s sys 0.784s
I20250811 02:02:17.732156 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:02:17.732470 12468 ts_itest-base.cc:116] --------------
I20250811 02:02:17.732695 12468 ts_itest-base.cc:117] 4 tablet servers
I20250811 02:02:17.732924 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:02:17.733124 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:02:17Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:02:17Z Disabled control of system clock
I20250811 02:02:17.791499 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:35087
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:35087 with env {}
W20250811 02:02:18.132244 12482 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:18.132997 12482 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:18.133507 12482 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:18.167564 12482 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:02:18.167920 12482 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:18.168226 12482 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:02:18.168490 12482 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:02:18.207851 12482 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:35087
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:35087
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:18.209651 12482 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:18.211848 12482 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:18.226557 12489 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:18.230886 12491 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:18.227666 12488 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:19.421725 12490 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 02:02:19.421751 12482 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:19.426462 12482 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:19.430225 12482 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:19.431720 12482 hybrid_clock.cc:648] HybridClock initialized: now 1754877739431662 us; error 68 us; skew 500 ppm
I20250811 02:02:19.432626 12482 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:19.440374 12482 webserver.cc:489] Webserver started at http://127.12.45.62:40911/ using document root <none> and password file <none>
I20250811 02:02:19.441522 12482 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:19.441756 12482 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:19.442298 12482 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:19.447561 12482 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c"
format_stamp: "Formatted at 2025-08-11 02:02:19 on dist-test-slave-xn5f"
I20250811 02:02:19.448912 12482 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c"
format_stamp: "Formatted at 2025-08-11 02:02:19 on dist-test-slave-xn5f"
I20250811 02:02:19.458099 12482 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.008s sys 0.002s
I20250811 02:02:19.464916 12498 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:19.466342 12482 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.004s
I20250811 02:02:19.466820 12482 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c"
format_stamp: "Formatted at 2025-08-11 02:02:19 on dist-test-slave-xn5f"
I20250811 02:02:19.467263 12482 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:19.524423 12482 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:19.526567 12482 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:19.527144 12482 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:19.622167 12482 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:35087
I20250811 02:02:19.622229 12549 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:35087 every 8 connection(s)
I20250811 02:02:19.625437 12482 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:02:19.627868 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 12482
I20250811 02:02:19.628641 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:02:19.632706 12550 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:19.654456 12550 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Bootstrap starting.
I20250811 02:02:19.663792 12550 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:19.666707 12550 log.cc:826] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:19.673460 12550 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: No bootstrap required, opened a new log
I20250811 02:02:19.695286 12550 raft_consensus.cc:357] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:19.696251 12550 raft_consensus.cc:383] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:19.696523 12550 raft_consensus.cc:738] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cc9b8991ee9e4ed9b58e7e606e014e9c, State: Initialized, Role: FOLLOWER
I20250811 02:02:19.697387 12550 consensus_queue.cc:260] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:19.697973 12550 raft_consensus.cc:397] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:02:19.698236 12550 raft_consensus.cc:491] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:02:19.698565 12550 raft_consensus.cc:3058] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:19.703652 12550 raft_consensus.cc:513] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:19.704754 12550 leader_election.cc:304] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: cc9b8991ee9e4ed9b58e7e606e014e9c; no voters:
I20250811 02:02:19.706907 12550 leader_election.cc:290] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:02:19.707619 12555 raft_consensus.cc:2802] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:02:19.709971 12555 raft_consensus.cc:695] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 LEADER]: Becoming Leader. State: Replica: cc9b8991ee9e4ed9b58e7e606e014e9c, State: Running, Role: LEADER
I20250811 02:02:19.710846 12555 consensus_queue.cc:237] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:19.712245 12550 sys_catalog.cc:564] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:02:19.725126 12556 sys_catalog.cc:455] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } } }
I20250811 02:02:19.726032 12556 sys_catalog.cc:458] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:19.725040 12557 sys_catalog.cc:455] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: SysCatalogTable state changed. Reason: New leader cc9b8991ee9e4ed9b58e7e606e014e9c. Latest consensus state: current_term: 1 leader_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } } }
I20250811 02:02:19.728061 12557 sys_catalog.cc:458] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:19.737071 12563 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:02:19.752661 12563 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:02:19.775588 12563 catalog_manager.cc:1349] Generated new cluster ID: d159ee3097354405870411b9ca756157
I20250811 02:02:19.775995 12563 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:02:19.793730 12563 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:02:19.795782 12563 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:02:19.815284 12563 catalog_manager.cc:5955] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Generated new TSK 0
I20250811 02:02:19.816967 12563 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:02:19.838483 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 02:02:20.214319 12574 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:20.214906 12574 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:20.215469 12574 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:20.250828 12574 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:20.251794 12574 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:02:20.289903 12574 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:20.291522 12574 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:20.293345 12574 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:20.307289 12580 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:20.308564 12581 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:21.711627 12579 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 12574
W20250811 02:02:21.763137 12574 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.456s user 0.476s sys 0.894s
W20250811 02:02:21.764294 12574 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.457s user 0.476s sys 0.894s
W20250811 02:02:21.764837 12582 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1456 milliseconds
I20250811 02:02:21.766250 12574 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 02:02:21.766320 12583 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:21.769676 12574 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:21.771940 12574 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:21.773298 12574 hybrid_clock.cc:648] HybridClock initialized: now 1754877741773246 us; error 44 us; skew 500 ppm
I20250811 02:02:21.774087 12574 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:21.780409 12574 webserver.cc:489] Webserver started at http://127.12.45.1:46273/ using document root <none> and password file <none>
I20250811 02:02:21.781347 12574 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:21.781569 12574 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:21.782025 12574 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:21.786520 12574 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "0f8bf32767014b989614cd346c852b96"
format_stamp: "Formatted at 2025-08-11 02:02:21 on dist-test-slave-xn5f"
I20250811 02:02:21.787686 12574 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "0f8bf32767014b989614cd346c852b96"
format_stamp: "Formatted at 2025-08-11 02:02:21 on dist-test-slave-xn5f"
I20250811 02:02:21.795310 12574 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250811 02:02:21.801366 12590 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:21.802559 12574 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250811 02:02:21.802902 12574 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "0f8bf32767014b989614cd346c852b96"
format_stamp: "Formatted at 2025-08-11 02:02:21 on dist-test-slave-xn5f"
I20250811 02:02:21.803305 12574 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:21.866690 12574 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:21.868254 12574 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:21.868721 12574 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:21.871415 12574 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:21.876420 12574 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:21.876637 12574 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:21.876909 12574 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:21.877074 12574 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:22.072180 12574 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:37989
I20250811 02:02:22.072264 12702 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:37989 every 8 connection(s)
I20250811 02:02:22.075098 12574 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:02:22.077204 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 12574
I20250811 02:02:22.077713 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:02:22.085909 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:02:22.105480 12703 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:22.105966 12703 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:22.107101 12703 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:22.109766 12515 ts_manager.cc:194] Registered new tserver with Master: 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989)
I20250811 02:02:22.111965 12515 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:41873
W20250811 02:02:22.420992 12707 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:22.421495 12707 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:22.421943 12707 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:22.454027 12707 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:22.454916 12707 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:02:22.490110 12707 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:22.491547 12707 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:22.493209 12707 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:22.505383 12713 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:23.115547 12703 heartbeater.cc:499] Master 127.12.45.62:35087 was elected leader, sending a full tablet report...
W20250811 02:02:22.509249 12714 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:23.679035 12715 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250811 02:02:23.680653 12716 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:23.680663 12707 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:23.683684 12707 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:23.685909 12707 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:23.687264 12707 hybrid_clock.cc:648] HybridClock initialized: now 1754877743687229 us; error 42 us; skew 500 ppm
I20250811 02:02:23.688069 12707 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:23.694224 12707 webserver.cc:489] Webserver started at http://127.12.45.2:35727/ using document root <none> and password file <none>
I20250811 02:02:23.695202 12707 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:23.695421 12707 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:23.695878 12707 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:23.700263 12707 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "02a3abbf0596474e9b9649fae51a44d5"
format_stamp: "Formatted at 2025-08-11 02:02:23 on dist-test-slave-xn5f"
I20250811 02:02:23.701368 12707 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "02a3abbf0596474e9b9649fae51a44d5"
format_stamp: "Formatted at 2025-08-11 02:02:23 on dist-test-slave-xn5f"
I20250811 02:02:23.708998 12707 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250811 02:02:23.714804 12723 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:23.715857 12707 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250811 02:02:23.716169 12707 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "02a3abbf0596474e9b9649fae51a44d5"
format_stamp: "Formatted at 2025-08-11 02:02:23 on dist-test-slave-xn5f"
I20250811 02:02:23.716511 12707 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:23.765149 12707 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:23.766636 12707 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:23.767088 12707 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:23.769565 12707 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:23.773592 12707 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:23.773819 12707 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:23.774057 12707 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:23.774216 12707 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:23.920063 12707 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:34017
I20250811 02:02:23.920167 12835 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:34017 every 8 connection(s)
I20250811 02:02:23.922700 12707 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:02:23.928946 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 12707
I20250811 02:02:23.929597 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:02:23.935640 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:02:23.944301 12836 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:23.944832 12836 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:23.946182 12836 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:23.948681 12515 ts_manager.cc:194] Registered new tserver with Master: 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017)
I20250811 02:02:23.949971 12515 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:37181
W20250811 02:02:24.242502 12840 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:24.243031 12840 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:24.243505 12840 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:24.274292 12840 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:24.275146 12840 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:02:24.309530 12840 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:24.311142 12840 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:24.312942 12840 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:24.324810 12846 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:24.954007 12836 heartbeater.cc:499] Master 127.12.45.62:35087 was elected leader, sending a full tablet report...
W20250811 02:02:24.325415 12847 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:25.546008 12849 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:25.550267 12848 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1223 milliseconds
W20250811 02:02:25.550977 12840 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.226s user 0.340s sys 0.882s
W20250811 02:02:25.551265 12840 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.226s user 0.344s sys 0.882s
I20250811 02:02:25.551484 12840 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:25.552564 12840 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:25.554961 12840 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:25.556437 12840 hybrid_clock.cc:648] HybridClock initialized: now 1754877745556400 us; error 56 us; skew 500 ppm
I20250811 02:02:25.557241 12840 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:25.564334 12840 webserver.cc:489] Webserver started at http://127.12.45.3:43893/ using document root <none> and password file <none>
I20250811 02:02:25.565335 12840 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:25.565573 12840 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:25.566110 12840 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:25.570617 12840 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "a5781ba736944d26b963e09d47365c0a"
format_stamp: "Formatted at 2025-08-11 02:02:25 on dist-test-slave-xn5f"
I20250811 02:02:25.571753 12840 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "a5781ba736944d26b963e09d47365c0a"
format_stamp: "Formatted at 2025-08-11 02:02:25 on dist-test-slave-xn5f"
I20250811 02:02:25.579598 12840 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250811 02:02:25.589821 12856 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:25.591480 12840 fs_manager.cc:730] Time spent opening block manager: real 0.009s user 0.009s sys 0.000s
I20250811 02:02:25.591848 12840 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "a5781ba736944d26b963e09d47365c0a"
format_stamp: "Formatted at 2025-08-11 02:02:25 on dist-test-slave-xn5f"
I20250811 02:02:25.592202 12840 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:25.692453 12840 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:25.694965 12840 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:25.695546 12840 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:25.700860 12840 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:25.708161 12840 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:25.708382 12840 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:25.708678 12840 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:25.708838 12840 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:26.166872 12840 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:36639
I20250811 02:02:26.167151 12968 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:36639 every 8 connection(s)
I20250811 02:02:26.169909 12840 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:02:26.178432 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 12840
I20250811 02:02:26.178983 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:02:26.188870 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.4:0
--local_ip_for_outbound_sockets=127.12.45.4
--webserver_interface=127.12.45.4
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:02:26.220726 12969 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:26.221549 12969 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:26.223775 12969 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:26.227931 12515 ts_manager.cc:194] Registered new tserver with Master: a5781ba736944d26b963e09d47365c0a (127.12.45.3:36639)
I20250811 02:02:26.229738 12515 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:35079
W20250811 02:02:26.667200 12973 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:26.667804 12973 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:26.668326 12973 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:26.701563 12973 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:26.702445 12973 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.4
I20250811 02:02:26.741497 12973 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.12.45.4
--webserver_port=0
--tserver_master_addrs=127.12.45.62:35087
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.4
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:26.743049 12973 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:26.744845 12973 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:26.763339 12980 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:27.235011 12969 heartbeater.cc:499] Master 127.12.45.62:35087 was elected leader, sending a full tablet report...
W20250811 02:02:26.766827 12981 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:28.154366 12983 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:28.156610 12973 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.386s user 0.002s sys 0.006s
W20250811 02:02:28.157056 12973 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.387s user 0.003s sys 0.007s
I20250811 02:02:28.157394 12973 server_base.cc:1047] running on GCE node
I20250811 02:02:28.159199 12973 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:28.163473 12973 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:28.165064 12973 hybrid_clock.cc:648] HybridClock initialized: now 1754877748164984 us; error 84 us; skew 500 ppm
I20250811 02:02:28.166437 12973 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:28.176348 12973 webserver.cc:489] Webserver started at http://127.12.45.4:45147/ using document root <none> and password file <none>
I20250811 02:02:28.177892 12973 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:28.178210 12973 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:28.178894 12973 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:28.186635 12973 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "ef9da64cb3804f50810e390969e9689b"
format_stamp: "Formatted at 2025-08-11 02:02:28 on dist-test-slave-xn5f"
I20250811 02:02:28.188441 12973 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "ef9da64cb3804f50810e390969e9689b"
format_stamp: "Formatted at 2025-08-11 02:02:28 on dist-test-slave-xn5f"
I20250811 02:02:28.199661 12973 fs_manager.cc:696] Time spent creating directory manager: real 0.010s user 0.010s sys 0.001s
I20250811 02:02:28.208379 12991 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:28.209820 12973 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.002s sys 0.002s
I20250811 02:02:28.210367 12973 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "ef9da64cb3804f50810e390969e9689b"
format_stamp: "Formatted at 2025-08-11 02:02:28 on dist-test-slave-xn5f"
I20250811 02:02:28.210872 12973 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:28.296105 12973 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:28.297618 12973 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:28.298022 12973 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:28.300741 12973 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:28.305015 12973 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:28.305287 12973 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250811 02:02:28.305549 12973 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:28.305711 12973 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:28.657474 12973 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.4:33709
I20250811 02:02:28.657550 13103 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.4:33709 every 8 connection(s)
I20250811 02:02:28.661643 12973 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250811 02:02:28.671903 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 12973
I20250811 02:02:28.672662 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250811 02:02:28.687707 13104 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:28.688160 13104 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:28.689210 13104 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:28.691591 12515 ts_manager.cc:194] Registered new tserver with Master: ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709)
I20250811 02:02:28.693302 12515 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.4:41963
I20250811 02:02:28.695567 12468 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250811 02:02:28.740753 12515 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:52258:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 02:02:28.871498 12638 tablet_service.cc:1468] Processing CreateTablet for tablet 1877863cb2654e72ba81e69ee4493df0 (DEFAULT_TABLE table=TestTable [id=4d3b820537a34843b6edbeee97c82719]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:28.872504 12771 tablet_service.cc:1468] Processing CreateTablet for tablet 1877863cb2654e72ba81e69ee4493df0 (DEFAULT_TABLE table=TestTable [id=4d3b820537a34843b6edbeee97c82719]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:28.873701 12638 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1877863cb2654e72ba81e69ee4493df0. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:28.874089 12771 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1877863cb2654e72ba81e69ee4493df0. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:28.879544 13039 tablet_service.cc:1468] Processing CreateTablet for tablet 1877863cb2654e72ba81e69ee4493df0 (DEFAULT_TABLE table=TestTable [id=4d3b820537a34843b6edbeee97c82719]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:28.882498 13039 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1877863cb2654e72ba81e69ee4493df0. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:28.915679 13123 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Bootstrap starting.
I20250811 02:02:28.919742 13124 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Bootstrap starting.
I20250811 02:02:28.935322 13123 tablet_bootstrap.cc:654] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:28.942283 13123 log.cc:826] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:28.942481 13124 tablet_bootstrap.cc:654] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:28.948067 13125 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Bootstrap starting.
I20250811 02:02:28.949359 13124 log.cc:826] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:28.965538 13125 tablet_bootstrap.cc:654] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:28.966017 13123 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: No bootstrap required, opened a new log
I20250811 02:02:28.966849 13123 ts_tablet_manager.cc:1397] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Time spent bootstrapping tablet: real 0.052s user 0.009s sys 0.028s
I20250811 02:02:28.967656 13124 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: No bootstrap required, opened a new log
I20250811 02:02:28.968441 13124 ts_tablet_manager.cc:1397] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Time spent bootstrapping tablet: real 0.050s user 0.010s sys 0.030s
I20250811 02:02:28.971433 13125 log.cc:826] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:28.983927 13125 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: No bootstrap required, opened a new log
I20250811 02:02:28.984787 13125 ts_tablet_manager.cc:1397] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Time spent bootstrapping tablet: real 0.038s user 0.015s sys 0.021s
I20250811 02:02:29.000077 13123 raft_consensus.cc:357] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.000897 13123 raft_consensus.cc:383] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:29.001178 13123 raft_consensus.cc:738] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 02a3abbf0596474e9b9649fae51a44d5, State: Initialized, Role: FOLLOWER
I20250811 02:02:29.002460 13123 consensus_queue.cc:260] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.011430 13123 ts_tablet_manager.cc:1428] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Time spent starting tablet: real 0.044s user 0.029s sys 0.011s
I20250811 02:02:29.011027 13124 raft_consensus.cc:357] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.012095 13124 raft_consensus.cc:383] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:29.012419 13124 raft_consensus.cc:738] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0f8bf32767014b989614cd346c852b96, State: Initialized, Role: FOLLOWER
I20250811 02:02:29.013918 13124 consensus_queue.cc:260] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.019448 13124 ts_tablet_manager.cc:1428] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Time spent starting tablet: real 0.051s user 0.029s sys 0.016s
I20250811 02:02:29.018024 13125 raft_consensus.cc:357] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.020097 13125 raft_consensus.cc:383] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:29.020439 13125 raft_consensus.cc:738] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ef9da64cb3804f50810e390969e9689b, State: Initialized, Role: FOLLOWER
I20250811 02:02:29.021450 13125 consensus_queue.cc:260] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.028540 13104 heartbeater.cc:499] Master 127.12.45.62:35087 was elected leader, sending a full tablet report...
I20250811 02:02:29.029913 13125 ts_tablet_manager.cc:1428] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Time spent starting tablet: real 0.045s user 0.035s sys 0.007s
W20250811 02:02:29.103940 12704 tablet.cc:2378] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:29.165115 13131 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:02:29.165800 13131 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
W20250811 02:02:29.167181 13105 tablet.cc:2378] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:29.169176 13131 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017), 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989)
I20250811 02:02:29.178681 12658 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "ef9da64cb3804f50810e390969e9689b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0f8bf32767014b989614cd346c852b96" is_pre_election: true
I20250811 02:02:29.179728 12658 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ef9da64cb3804f50810e390969e9689b in term 0.
I20250811 02:02:29.180577 12791 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "ef9da64cb3804f50810e390969e9689b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "02a3abbf0596474e9b9649fae51a44d5" is_pre_election: true
I20250811 02:02:29.181152 12992 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0f8bf32767014b989614cd346c852b96, ef9da64cb3804f50810e390969e9689b; no voters:
I20250811 02:02:29.181510 12791 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ef9da64cb3804f50810e390969e9689b in term 0.
I20250811 02:02:29.182214 13131 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:02:29.182603 13131 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:02:29.182978 13131 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 0 FOLLOWER]: Advancing to term 1
W20250811 02:02:29.185096 12837 tablet.cc:2378] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:29.254300 13131 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.256008 13131 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [CANDIDATE]: Term 1 election: Requested vote from peers 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017), 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989)
I20250811 02:02:29.257051 12791 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "ef9da64cb3804f50810e390969e9689b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "02a3abbf0596474e9b9649fae51a44d5"
I20250811 02:02:29.257234 12658 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "ef9da64cb3804f50810e390969e9689b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0f8bf32767014b989614cd346c852b96"
I20250811 02:02:29.257505 12791 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:29.257879 12658 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:29.351996 12791 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ef9da64cb3804f50810e390969e9689b in term 1.
I20250811 02:02:29.352104 12658 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ef9da64cb3804f50810e390969e9689b in term 1.
I20250811 02:02:29.353338 12995 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5, ef9da64cb3804f50810e390969e9689b; no voters:
I20250811 02:02:29.354023 13131 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:02:29.355871 13131 raft_consensus.cc:695] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [term 1 LEADER]: Becoming Leader. State: Replica: ef9da64cb3804f50810e390969e9689b, State: Running, Role: LEADER
I20250811 02:02:29.356814 13131 consensus_queue.cc:237] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:29.367071 12514 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b reported cstate change: term changed from 0 to 1, leader changed from <none> to ef9da64cb3804f50810e390969e9689b (127.12.45.4). New cstate: current_term: 1 leader_uuid: "ef9da64cb3804f50810e390969e9689b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } health_report { overall_health: HEALTHY } } }
I20250811 02:02:29.467092 12468 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250811 02:02:29.471385 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 0f8bf32767014b989614cd346c852b96 to finish bootstrapping
I20250811 02:02:29.485908 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 02a3abbf0596474e9b9649fae51a44d5 to finish bootstrapping
I20250811 02:02:29.498150 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ef9da64cb3804f50810e390969e9689b to finish bootstrapping
I20250811 02:02:29.511166 12468 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250811 02:02:29.515159 12468 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "1877863cb2654e72ba81e69ee4493df0"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: LEADER
}
}
ts_infos {
permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5"
rpc_addresses {
host: "127.12.45.2"
port: 34017
}
}
ts_infos {
permanent_uuid: "0f8bf32767014b989614cd346c852b96"
rpc_addresses {
host: "127.12.45.1"
port: 37989
}
}
ts_infos {
permanent_uuid: "ef9da64cb3804f50810e390969e9689b"
rpc_addresses {
host: "127.12.45.4"
port: 33709
}
}
I20250811 02:02:29.761744 13131 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [LEADER]: Connected to new peer: Peer: permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:02:29.789955 13131 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P ef9da64cb3804f50810e390969e9689b [LEADER]: Connected to new peer: Peer: permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:02:29.794332 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 12973
W20250811 02:02:29.825819 12593 connection.cc:537] server connection from 127.12.45.4:34209 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250811 02:02:29.827407 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 12482
I20250811 02:02:29.855927 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:35087
--webserver_interface=127.12.45.62
--webserver_port=40911
--builtin_ntp_servers=127.12.45.20:43319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:35087 with env {}
W20250811 02:02:30.182883 13148 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:30.183559 13148 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:30.183985 13148 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:30.219945 13148 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:02:30.220294 13148 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:30.220640 13148 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:02:30.220903 13148 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
W20250811 02:02:30.253144 12969 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:35087 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:35087: connect: Connection refused (error 111)
I20250811 02:02:30.260601 13148 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:35087
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:35087
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=40911
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:30.262166 13148 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:30.264001 13148 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:30.276618 13156 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:30.819896 12836 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:35087 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:35087: connect: Connection refused (error 111)
W20250811 02:02:30.824812 12703 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:35087 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:35087: connect: Connection refused (error 111)
I20250811 02:02:31.304064 13166 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader ef9da64cb3804f50810e390969e9689b)
I20250811 02:02:31.305116 13166 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:31.308143 13165 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader ef9da64cb3804f50810e390969e9689b)
I20250811 02:02:31.309424 13165 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:31.320102 13166 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017), ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709)
I20250811 02:02:31.358708 13165 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989), ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709)
W20250811 02:02:31.368952 12726 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:31.383069 12658 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "02a3abbf0596474e9b9649fae51a44d5" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "0f8bf32767014b989614cd346c852b96" is_pre_election: true
I20250811 02:02:31.384181 12658 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 02a3abbf0596474e9b9649fae51a44d5 in term 1.
W20250811 02:02:31.388036 12593 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:31.388900 12724 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5, 0f8bf32767014b989614cd346c852b96; no voters:
I20250811 02:02:31.390426 13165 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 02:02:31.391023 13165 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Starting leader election (detected failure of leader ef9da64cb3804f50810e390969e9689b)
I20250811 02:02:31.391594 13165 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 1 FOLLOWER]: Advancing to term 2
W20250811 02:02:31.415899 12726 leader_election.cc:336] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709): Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:31.433610 12791 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "0f8bf32767014b989614cd346c852b96" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "02a3abbf0596474e9b9649fae51a44d5" is_pre_election: true
W20250811 02:02:31.434607 12593 outbound_call.cc:321] RPC callback for RPC call kudu.consensus.ConsensusService.RequestConsensusVote -> {remote=127.12.45.4:33709, user_credentials={real_user=slave}} blocked reactor thread for 46819.9us
W20250811 02:02:31.439474 12593 leader_election.cc:336] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709): Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:31.455067 13165 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:31.457110 12791 raft_consensus.cc:2391] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 0f8bf32767014b989614cd346c852b96 in current term 2: Already voted for candidate 02a3abbf0596474e9b9649fae51a44d5 in this term.
I20250811 02:02:31.462909 12594 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 0f8bf32767014b989614cd346c852b96; no voters: 02a3abbf0596474e9b9649fae51a44d5, ef9da64cb3804f50810e390969e9689b
I20250811 02:02:31.465109 13166 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:02:31.480679 12658 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "02a3abbf0596474e9b9649fae51a44d5" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "0f8bf32767014b989614cd346c852b96"
W20250811 02:02:31.490721 12726 leader_election.cc:336] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709): Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:31.491861 13165 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 election: Requested vote from peers 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989), ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709)
I20250811 02:02:31.556941 13166 raft_consensus.cc:2747] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250811 02:02:31.651614 12658 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 02a3abbf0596474e9b9649fae51a44d5 in term 2.
I20250811 02:02:31.656963 12724 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5, 0f8bf32767014b989614cd346c852b96; no voters: ef9da64cb3804f50810e390969e9689b
I20250811 02:02:31.658150 13165 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:02:31.669932 13165 raft_consensus.cc:695] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 LEADER]: Becoming Leader. State: Replica: 02a3abbf0596474e9b9649fae51a44d5, State: Running, Role: LEADER
I20250811 02:02:31.673816 13165 consensus_queue.cc:237] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
W20250811 02:02:31.680256 13155 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 13148
W20250811 02:02:31.785080 13148 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.507s user 0.500s sys 0.991s
W20250811 02:02:31.785584 13148 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.508s user 0.500s sys 0.991s
W20250811 02:02:30.277925 13157 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:31.787405 13159 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:31.790127 13158 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1508 milliseconds
I20250811 02:02:31.790160 13148 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:31.791538 13148 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:31.794266 13148 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:31.795652 13148 hybrid_clock.cc:648] HybridClock initialized: now 1754877751795572 us; error 80 us; skew 500 ppm
I20250811 02:02:31.796497 13148 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:31.803133 13148 webserver.cc:489] Webserver started at http://127.12.45.62:40911/ using document root <none> and password file <none>
I20250811 02:02:31.804162 13148 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:31.804404 13148 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:31.813268 13148 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250811 02:02:31.818063 13179 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:31.819255 13148 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250811 02:02:31.819643 13148 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c"
format_stamp: "Formatted at 2025-08-11 02:02:19 on dist-test-slave-xn5f"
I20250811 02:02:31.821729 13148 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:31.881139 13148 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:31.882836 13148 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:31.883395 13148 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:31.969856 13148 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:35087
I20250811 02:02:31.969897 13230 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:35087 every 8 connection(s)
I20250811 02:02:31.973063 13148 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:02:31.978206 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13148
I20250811 02:02:31.978900 12468 kudu-admin-test.cc:735] Forcing unsafe config change on tserver 0f8bf32767014b989614cd346c852b96
I20250811 02:02:31.987715 13231 sys_catalog.cc:263] Verifying existing consensus state
I20250811 02:02:31.992954 13231 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Bootstrap starting.
I20250811 02:02:32.032119 13231 log.cc:826] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:32.056118 13231 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:02:32.057271 13231 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Bootstrap complete.
I20250811 02:02:32.080145 13231 raft_consensus.cc:357] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:32.082311 13231 raft_consensus.cc:738] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: cc9b8991ee9e4ed9b58e7e606e014e9c, State: Initialized, Role: FOLLOWER
I20250811 02:02:32.083285 13231 consensus_queue.cc:260] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:32.083818 13231 raft_consensus.cc:397] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:02:32.084115 13231 raft_consensus.cc:491] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:02:32.084455 13231 raft_consensus.cc:3058] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:02:32.136886 12658 raft_consensus.cc:1273] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 2 FOLLOWER]: Refusing update from remote peer 02a3abbf0596474e9b9649fae51a44d5: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 02:02:32.138275 13165 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250811 02:02:32.162017 13231 raft_consensus.cc:513] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:32.163182 13231 leader_election.cc:304] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: cc9b8991ee9e4ed9b58e7e606e014e9c; no voters:
W20250811 02:02:32.165030 12726 consensus_peers.cc:489] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 -> Peer ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709): Couldn't send request to peer ef9da64cb3804f50810e390969e9689b. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 02:02:32.165758 13231 leader_election.cc:290] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 02:02:32.166240 13241 raft_consensus.cc:2802] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:02:32.181643 13241 raft_consensus.cc:695] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [term 2 LEADER]: Becoming Leader. State: Replica: cc9b8991ee9e4ed9b58e7e606e014e9c, State: Running, Role: LEADER
I20250811 02:02:32.182476 13231 sys_catalog.cc:564] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:02:32.182720 13241 consensus_queue.cc:237] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } }
I20250811 02:02:32.202872 12703 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:32.214432 13243 sys_catalog.cc:455] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: SysCatalogTable state changed. Reason: New leader cc9b8991ee9e4ed9b58e7e606e014e9c. Latest consensus state: current_term: 2 leader_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } } }
I20250811 02:02:32.215342 13243 sys_catalog.cc:458] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:32.217926 12836 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
I20250811 02:02:32.216975 13242 sys_catalog.cc:455] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cc9b8991ee9e4ed9b58e7e606e014e9c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 35087 } } }
I20250811 02:02:32.219422 13242 sys_catalog.cc:458] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:32.224388 13249 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:02:32.242192 13249 catalog_manager.cc:671] Loaded metadata for table TestTable [id=4d3b820537a34843b6edbeee97c82719]
I20250811 02:02:32.252779 13249 tablet_loader.cc:96] loaded metadata for tablet 1877863cb2654e72ba81e69ee4493df0 (table TestTable [id=4d3b820537a34843b6edbeee97c82719])
I20250811 02:02:32.255224 13249 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:02:32.261529 13249 catalog_manager.cc:1261] Loaded cluster ID: d159ee3097354405870411b9ca756157
I20250811 02:02:32.261969 13249 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:02:32.273061 13249 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:02:32.289989 13249 catalog_manager.cc:5966] T 00000000000000000000000000000000 P cc9b8991ee9e4ed9b58e7e606e014e9c: Loaded TSK: 0
I20250811 02:02:32.301589 13249 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:02:32.323446 12969 heartbeater.cc:344] Connected to a master server at 127.12.45.62:35087
W20250811 02:02:32.455826 13233 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:32.456775 13233 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:32.509711 13233 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250811 02:02:33.227608 13196 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" instance_seqno: 1754877743885160) as {username='slave'} at 127.12.45.2:49171; Asking this server to re-register.
I20250811 02:02:33.229522 13195 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "0f8bf32767014b989614cd346c852b96" instance_seqno: 1754877742021557) as {username='slave'} at 127.12.45.1:42007; Asking this server to re-register.
I20250811 02:02:33.230116 12836 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:33.231212 12703 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:33.231240 12836 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:33.231920 12703 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:33.236074 13194 ts_manager.cc:194] Registered new tserver with Master: 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989)
I20250811 02:02:33.239012 13196 ts_manager.cc:194] Registered new tserver with Master: 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017)
I20250811 02:02:33.242619 13194 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 reported cstate change: term changed from 1 to 2, leader changed from ef9da64cb3804f50810e390969e9689b (127.12.45.4) to 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2). New cstate: current_term: 2 leader_uuid: "02a3abbf0596474e9b9649fae51a44d5" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } } }
I20250811 02:02:33.329978 13196 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "a5781ba736944d26b963e09d47365c0a" instance_seqno: 1754877746056472) as {username='slave'} at 127.12.45.3:36617; Asking this server to re-register.
I20250811 02:02:33.332206 12969 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:33.333127 12969 heartbeater.cc:507] Master 127.12.45.62:35087 requested a full tablet report, sending...
I20250811 02:02:33.336265 13196 ts_manager.cc:194] Registered new tserver with Master: a5781ba736944d26b963e09d47365c0a (127.12.45.3:36639)
W20250811 02:02:33.959373 13266 debug-util.cc:398] Leaking SignalData structure 0x7b08000347a0 after lost signal to thread 13233
W20250811 02:02:33.959925 13266 kernel_stack_watchdog.cc:198] Thread 13233 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:02:34.205309 13233 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.649s user 0.570s sys 1.052s
W20250811 02:02:34.339530 13233 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.784s user 0.574s sys 1.065s
I20250811 02:02:34.385233 12658 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "0f8bf32767014b989614cd346c852b96"
tablet_id: "1877863cb2654e72ba81e69ee4493df0"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5"
}
peers {
permanent_uuid: "0f8bf32767014b989614cd346c852b96"
}
}
from {username='slave'} at 127.0.0.1:39128
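(Editor's note: the UnsafeChangeConfig request dumped above carries exactly the fields shown — dest_uuid, tablet_id, caller_id, and a new_config listing only the surviving peers. The caller_id "kudu-tools" indicates it was sent by the kudu CLI, which in recent Kudu versions exposes this as `kudu remote_replica unsafe_change_config`, though the exact invocation is not shown in this log. Below is a minimal, hedged sketch of assembling an equivalent request; the message type `UnsafeChangeConfigRequestPB`, the include path, and the namespace are assumptions based on the RPC name, while the field values are transcribed from the dump above.)

```cpp
// Hypothetical sketch only: build the same request the log shows arriving
// from "kudu-tools". Field names come from the dump above; the message type,
// header path, and namespace are assumptions, not confirmed by this log.
#include <string>
#include "kudu/consensus/consensus.pb.h"  // assumed location of UnsafeChangeConfigRequestPB

kudu::consensus::UnsafeChangeConfigRequestPB BuildUnsafeChangeConfigRequest() {
  kudu::consensus::UnsafeChangeConfigRequestPB req;
  req.set_dest_uuid("0f8bf32767014b989614cd346c852b96");   // replica asked to force the change
  req.set_tablet_id("1877863cb2654e72ba81e69ee4493df0");
  req.set_caller_id("kudu-tools");
  // Only the peers to keep are listed; the unreachable peer (ef9da64c...) is omitted,
  // which is what makes the change "unsafe" (it bypasses the normal Raft majority).
  for (const std::string& uuid :
       {"02a3abbf0596474e9b9649fae51a44d5", "0f8bf32767014b989614cd346c852b96"}) {
    req.mutable_new_config()->add_peers()->set_permanent_uuid(uuid);
  }
  return req;
}
```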
W20250811 02:02:34.386453 12658 raft_consensus.cc:2216] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 2 FOLLOWER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true
I20250811 02:02:34.387373 12658 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 2 FOLLOWER]: Advancing to term 3
I20250811 02:02:34.421986 12658 raft_consensus.cc:1238] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Rejecting Update request from peer 02a3abbf0596474e9b9649fae51a44d5 for earlier term 2. Current term is 3. Ops: []
I20250811 02:02:34.423223 13165 consensus_queue.cc:1046] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [LEADER]: Peer responded invalid term: Peer: permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 }, Status: INVALID_TERM, Last received: 2.2, Next index: 3, Last known committed idx: 2, Time since last communication: 0.001s
I20250811 02:02:34.424856 13290 raft_consensus.cc:3053] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 LEADER]: Stepping down as leader of term 2
I20250811 02:02:34.425154 13290 raft_consensus.cc:738] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 LEADER]: Becoming Follower/Learner. State: Replica: 02a3abbf0596474e9b9649fae51a44d5, State: Running, Role: LEADER
I20250811 02:02:34.425767 13290 consensus_queue.cc:260] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:34.426741 13290 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 2 FOLLOWER]: Advancing to term 3
W20250811 02:02:35.161504 13266 debug-util.cc:398] Leaking SignalData structure 0x7b08000379e0 after lost signal to thread 13233
I20250811 02:02:35.827564 13293 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 3 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:02:35.827951 13293 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } }
I20250811 02:02:35.829339 13293 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989), ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709)
I20250811 02:02:35.830287 12658 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "02a3abbf0596474e9b9649fae51a44d5" candidate_term: 4 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "0f8bf32767014b989614cd346c852b96" is_pre_election: true
W20250811 02:02:35.833925 12726 leader_election.cc:336] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 4 pre-election: RPC error from VoteRequest() call to peer ef9da64cb3804f50810e390969e9689b (127.12.45.4:33709): Network error: Client connection negotiation failed: client connection to 127.12.45.4:33709: connect: Connection refused (error 111)
I20250811 02:02:35.834312 12726 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5; no voters: 0f8bf32767014b989614cd346c852b96, ef9da64cb3804f50810e390969e9689b
I20250811 02:02:35.834985 13293 raft_consensus.cc:2747] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 3 FOLLOWER]: Leader pre-election lost for term 4. Reason: could not achieve majority
I20250811 02:02:35.894395 13296 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250811 02:02:35.894789 13296 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true
I20250811 02:02:35.895838 13296 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017)
I20250811 02:02:35.897087 12791 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "0f8bf32767014b989614cd346c852b96" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "02a3abbf0596474e9b9649fae51a44d5" is_pre_election: true
I20250811 02:02:35.897578 12791 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0f8bf32767014b989614cd346c852b96 in term 3.
I20250811 02:02:35.898478 12594 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5, 0f8bf32767014b989614cd346c852b96; no voters:
I20250811 02:02:35.899047 13296 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250811 02:02:35.899308 13296 raft_consensus.cc:491] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250811 02:02:35.899541 13296 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 3 FOLLOWER]: Advancing to term 4
I20250811 02:02:35.903517 13296 raft_consensus.cc:513] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true
I20250811 02:02:35.904428 13296 leader_election.cc:290] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 4 election: Requested vote from peers 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2:34017)
I20250811 02:02:35.905294 12791 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1877863cb2654e72ba81e69ee4493df0" candidate_uuid: "0f8bf32767014b989614cd346c852b96" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "02a3abbf0596474e9b9649fae51a44d5"
I20250811 02:02:35.905647 12791 raft_consensus.cc:3058] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 3 FOLLOWER]: Advancing to term 4
I20250811 02:02:35.909667 12791 raft_consensus.cc:2466] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0f8bf32767014b989614cd346c852b96 in term 4.
I20250811 02:02:35.910413 12594 leader_election.cc:304] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 02a3abbf0596474e9b9649fae51a44d5, 0f8bf32767014b989614cd346c852b96; no voters:
I20250811 02:02:35.910967 13296 raft_consensus.cc:2802] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 FOLLOWER]: Leader election won for term 4
I20250811 02:02:35.911707 13296 raft_consensus.cc:695] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 LEADER]: Becoming Leader. State: Replica: 0f8bf32767014b989614cd346c852b96, State: Running, Role: LEADER
I20250811 02:02:35.912493 13296 consensus_queue.cc:237] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true
I20250811 02:02:35.918088 13194 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 reported cstate change: term changed from 2 to 4, leader changed from 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2) to 0f8bf32767014b989614cd346c852b96 (127.12.45.1), now has a pending config: VOTER 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2), VOTER 0f8bf32767014b989614cd346c852b96 (127.12.45.1). New cstate: current_term: 4 leader_uuid: "0f8bf32767014b989614cd346c852b96" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ef9da64cb3804f50810e390969e9689b" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 33709 } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true }
I20250811 02:02:36.322965 12791 raft_consensus.cc:1273] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Refusing update from remote peer 0f8bf32767014b989614cd346c852b96: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250811 02:02:36.324131 13296 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Connected to new peer: Peer: permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250811 02:02:36.330966 13297 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER ef9da64cb3804f50810e390969e9689b (127.12.45.4) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true }
I20250811 02:02:36.332073 12791 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER ef9da64cb3804f50810e390969e9689b (127.12.45.4) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } unsafe_config_change: true }
I20250811 02:02:36.341830 13194 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 reported cstate change: config changed from index -1 to 3, VOTER ef9da64cb3804f50810e390969e9689b (127.12.45.4) evicted, no longer has a pending config: VOTER 02a3abbf0596474e9b9649fae51a44d5 (127.12.45.2), VOTER 0f8bf32767014b989614cd346c852b96 (127.12.45.1). New cstate: current_term: 4 leader_uuid: "0f8bf32767014b989614cd346c852b96" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
W20250811 02:02:36.349043 13194 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 1877863cb2654e72ba81e69ee4493df0 on TS ef9da64cb3804f50810e390969e9689b: Not found: failed to reset TS proxy: Could not find TS for UUID ef9da64cb3804f50810e390969e9689b
I20250811 02:02:36.368336 12658 consensus_queue.cc:237] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } } unsafe_config_change: true
I20250811 02:02:36.374852 12791 raft_consensus.cc:1273] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Refusing update from remote peer 0f8bf32767014b989614cd346c852b96: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250811 02:02:36.376114 13297 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Connected to new peer: Peer: permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250811 02:02:36.381896 13296 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER a5781ba736944d26b963e09d47365c0a (127.12.45.3) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } } unsafe_config_change: true }
I20250811 02:02:36.383327 12791 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER a5781ba736944d26b963e09d47365c0a (127.12.45.3) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } } unsafe_config_change: true }
W20250811 02:02:36.384138 12594 consensus_peers.cc:489] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 -> Peer a5781ba736944d26b963e09d47365c0a (127.12.45.3:36639): Couldn't send request to peer a5781ba736944d26b963e09d47365c0a. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 1877863cb2654e72ba81e69ee4493df0. This is attempt 1: this message will repeat every 5th retry.
I20250811 02:02:36.388481 13180 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 1877863cb2654e72ba81e69ee4493df0 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250811 02:02:36.391368 13196 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 reported cstate change: config changed from index 3 to 5, NON_VOTER a5781ba736944d26b963e09d47365c0a (127.12.45.3) added. New cstate: current_term: 4 leader_uuid: "0f8bf32767014b989614cd346c852b96" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250811 02:02:36.401574 13181 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 1877863cb2654e72ba81e69ee4493df0 on TS ef9da64cb3804f50810e390969e9689b failed: Not found: failed to reset TS proxy: Could not find TS for UUID ef9da64cb3804f50810e390969e9689b
I20250811 02:02:36.872787 13311 ts_tablet_manager.cc:927] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Initiating tablet copy from peer 0f8bf32767014b989614cd346c852b96 (127.12.45.1:37989)
I20250811 02:02:36.875492 13311 tablet_copy_client.cc:323] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: tablet copy: Beginning tablet copy session from remote peer at address 127.12.45.1:37989
I20250811 02:02:36.886466 12678 tablet_copy_service.cc:140] P 0f8bf32767014b989614cd346c852b96: Received BeginTabletCopySession request for tablet 1877863cb2654e72ba81e69ee4493df0 from peer a5781ba736944d26b963e09d47365c0a ({username='slave'} at 127.12.45.3:56205)
I20250811 02:02:36.886914 12678 tablet_copy_service.cc:161] P 0f8bf32767014b989614cd346c852b96: Beginning new tablet copy session on tablet 1877863cb2654e72ba81e69ee4493df0 from peer a5781ba736944d26b963e09d47365c0a at {username='slave'} at 127.12.45.3:56205: session id = a5781ba736944d26b963e09d47365c0a-1877863cb2654e72ba81e69ee4493df0
I20250811 02:02:36.892035 12678 tablet_copy_source_session.cc:215] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 02:02:36.897044 13311 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1877863cb2654e72ba81e69ee4493df0. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:36.915086 13311 tablet_copy_client.cc:806] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: tablet copy: Starting download of 0 data blocks...
I20250811 02:02:36.915647 13311 tablet_copy_client.cc:670] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: tablet copy: Starting download of 1 WAL segments...
I20250811 02:02:36.919000 13311 tablet_copy_client.cc:538] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 02:02:36.924504 13311 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Bootstrap starting.
I20250811 02:02:36.936278 13311 log.cc:826] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:36.946604 13311 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:02:36.947378 13311 tablet_bootstrap.cc:492] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Bootstrap complete.
I20250811 02:02:36.947949 13311 ts_tablet_manager.cc:1397] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Time spent bootstrapping tablet: real 0.024s user 0.019s sys 0.004s
I20250811 02:02:36.965467 13311 raft_consensus.cc:357] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } } unsafe_config_change: true
I20250811 02:02:36.966374 13311 raft_consensus.cc:738] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: a5781ba736944d26b963e09d47365c0a, State: Initialized, Role: LEARNER
I20250811 02:02:36.967028 13311 consensus_queue.cc:260] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: true } } unsafe_config_change: true
I20250811 02:02:36.970103 13311 ts_tablet_manager.cc:1428] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a: Time spent starting tablet: real 0.022s user 0.021s sys 0.001s
I20250811 02:02:36.971742 12678 tablet_copy_service.cc:342] P 0f8bf32767014b989614cd346c852b96: Request end of tablet copy session a5781ba736944d26b963e09d47365c0a-1877863cb2654e72ba81e69ee4493df0 received from {username='slave'} at 127.12.45.3:56205
I20250811 02:02:36.972172 12678 tablet_copy_service.cc:434] P 0f8bf32767014b989614cd346c852b96: ending tablet copy session a5781ba736944d26b963e09d47365c0a-1877863cb2654e72ba81e69ee4493df0 on tablet 1877863cb2654e72ba81e69ee4493df0 with peer a5781ba736944d26b963e09d47365c0a
I20250811 02:02:37.492478 12924 raft_consensus.cc:1215] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250811 02:02:37.567660 13181 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 1877863cb2654e72ba81e69ee4493df0 on TS ef9da64cb3804f50810e390969e9689b failed: Not found: failed to reset TS proxy: Could not find TS for UUID ef9da64cb3804f50810e390969e9689b
I20250811 02:02:37.900224 13317 raft_consensus.cc:1062] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96: attempting to promote NON_VOTER a5781ba736944d26b963e09d47365c0a to VOTER
I20250811 02:02:37.901783 13317 consensus_queue.cc:237] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false } } unsafe_config_change: true
I20250811 02:02:37.906235 12924 raft_consensus.cc:1273] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [term 4 LEARNER]: Refusing update from remote peer 0f8bf32767014b989614cd346c852b96: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250811 02:02:37.906301 12791 raft_consensus.cc:1273] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Refusing update from remote peer 0f8bf32767014b989614cd346c852b96: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250811 02:02:37.907495 13318 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250811 02:02:37.908290 13319 consensus_queue.cc:1035] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [LEADER]: Connected to new peer: Peer: permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250811 02:02:37.914105 13318 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, a5781ba736944d26b963e09d47365c0a (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false } } unsafe_config_change: true }
I20250811 02:02:37.915481 12791 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P 02a3abbf0596474e9b9649fae51a44d5 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, a5781ba736944d26b963e09d47365c0a (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false } } unsafe_config_change: true }
I20250811 02:02:37.917163 12924 raft_consensus.cc:2953] T 1877863cb2654e72ba81e69ee4493df0 P a5781ba736944d26b963e09d47365c0a [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, a5781ba736944d26b963e09d47365c0a (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false } } unsafe_config_change: true }
I20250811 02:02:37.924818 13196 catalog_manager.cc:5582] T 1877863cb2654e72ba81e69ee4493df0 P 0f8bf32767014b989614cd346c852b96 reported cstate change: config changed from index 5 to 6, a5781ba736944d26b963e09d47365c0a (127.12.45.3) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "0f8bf32767014b989614cd346c852b96" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 34017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "0f8bf32767014b989614cd346c852b96" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37989 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a5781ba736944d26b963e09d47365c0a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36639 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250811 02:02:37.930707 12468 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250811 02:02:37.968012 12468 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "1877863cb2654e72ba81e69ee4493df0"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "02a3abbf0596474e9b9649fae51a44d5"
rpc_addresses {
host: "127.12.45.2"
port: 34017
}
}
ts_infos {
permanent_uuid: "0f8bf32767014b989614cd346c852b96"
rpc_addresses {
host: "127.12.45.1"
port: 37989
}
}
ts_infos {
permanent_uuid: "a5781ba736944d26b963e09d47365c0a"
rpc_addresses {
host: "127.12.45.3"
port: 36639
}
}
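(Editor's note: in the tablet locations dump above, each interned_replicas entry refers to a ts_infos entry by ts_info_idx instead of repeating the tablet server's UUID and address. A minimal sketch of resolving that indirection follows; the struct and variable names are illustrative only, and the values are transcribed from the dump.)

```cpp
// Illustrative only: resolve interned replica entries against the parallel
// ts_infos list, as printed in the tablet locations dump above.
#include <cstdio>
#include <string>
#include <vector>

struct TsInfo { std::string uuid; std::string host; int port; };
struct InternedReplica { int ts_info_idx; std::string role; };

int main() {
  // Values transcribed from the log output above.
  std::vector<TsInfo> ts_infos = {
      {"02a3abbf0596474e9b9649fae51a44d5", "127.12.45.2", 34017},
      {"0f8bf32767014b989614cd346c852b96", "127.12.45.1", 37989},
      {"a5781ba736944d26b963e09d47365c0a", "127.12.45.3", 36639},
  };
  std::vector<InternedReplica> replicas = {{0, "FOLLOWER"}, {1, "LEADER"}, {2, "FOLLOWER"}};

  for (const auto& r : replicas) {
    const TsInfo& ts = ts_infos[r.ts_info_idx];
    std::printf("%s %s (%s:%d)\n", r.role.c_str(), ts.uuid.c_str(), ts.host.c_str(), ts.port);
  }
  return 0;
}
```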
I20250811 02:02:37.970916 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 12574
I20250811 02:02:38.007627 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 12707
I20250811 02:02:38.033550 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 12840
I20250811 02:02:38.061108 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13148
2025-08-11T02:02:38Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (21661 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250811 02:02:38.123379 12468 test_util.cc:276] Using random seed: 1365618001
I20250811 02:02:38.129202 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:02:38.129352 12468 ts_itest-base.cc:116] --------------
I20250811 02:02:38.129467 12468 ts_itest-base.cc:117] 3 tablet servers
I20250811 02:02:38.129565 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:02:38.129659 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:02:38Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:02:38Z Disabled control of system clock
I20250811 02:02:38.164487 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:34141
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:32789
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:34141
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250811 02:02:38.466671 13338 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:38.467286 13338 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:38.467711 13338 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:38.498224 13338 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:02:38.498512 13338 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:38.498709 13338 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:02:38.498901 13338 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:02:38.533910 13338 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:32789
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:34141
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:34141
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:38.535574 13338 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:38.537271 13338 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:38.548418 13344 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:38.548864 13345 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:39.846527 13347 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:39.849026 13338 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.301s user 0.443s sys 0.844s
W20250811 02:02:39.849632 13338 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.302s user 0.444s sys 0.846s
W20250811 02:02:39.851512 13346 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1296 milliseconds
I20250811 02:02:39.851559 13338 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:39.852881 13338 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:39.855705 13338 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:39.857046 13338 hybrid_clock.cc:648] HybridClock initialized: now 1754877759857021 us; error 46 us; skew 500 ppm
I20250811 02:02:39.857859 13338 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:39.865669 13338 webserver.cc:489] Webserver started at http://127.12.45.62:35169/ using document root <none> and password file <none>
I20250811 02:02:39.866621 13338 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:39.866847 13338 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:39.867326 13338 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:39.871817 13338 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "5a6868e00b1e4913affa46d0b69009f1"
format_stamp: "Formatted at 2025-08-11 02:02:39 on dist-test-slave-xn5f"
I20250811 02:02:39.872896 13338 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "5a6868e00b1e4913affa46d0b69009f1"
format_stamp: "Formatted at 2025-08-11 02:02:39 on dist-test-slave-xn5f"
I20250811 02:02:39.880807 13338 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250811 02:02:39.886900 13354 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:39.888124 13338 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 02:02:39.888481 13338 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "5a6868e00b1e4913affa46d0b69009f1"
format_stamp: "Formatted at 2025-08-11 02:02:39 on dist-test-slave-xn5f"
I20250811 02:02:39.888833 13338 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:39.958123 13338 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:39.959638 13338 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:39.960093 13338 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:40.054450 13338 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:34141
I20250811 02:02:40.054598 13406 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:34141 every 8 connection(s)
I20250811 02:02:40.057271 13338 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:02:40.064260 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13338
I20250811 02:02:40.064314 13407 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:40.064671 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:02:40.088016 13407 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1: Bootstrap starting.
I20250811 02:02:40.093729 13407 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:40.095830 13407 log.cc:826] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:40.100459 13407 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1: No bootstrap required, opened a new log
I20250811 02:02:40.118156 13407 raft_consensus.cc:357] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } }
I20250811 02:02:40.118867 13407 raft_consensus.cc:383] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:40.119118 13407 raft_consensus.cc:738] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5a6868e00b1e4913affa46d0b69009f1, State: Initialized, Role: FOLLOWER
I20250811 02:02:40.119777 13407 consensus_queue.cc:260] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } }
I20250811 02:02:40.120285 13407 raft_consensus.cc:397] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:02:40.120533 13407 raft_consensus.cc:491] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:02:40.120803 13407 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:40.124930 13407 raft_consensus.cc:513] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } }
I20250811 02:02:40.125626 13407 leader_election.cc:304] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5a6868e00b1e4913affa46d0b69009f1; no voters:
I20250811 02:02:40.127413 13407 leader_election.cc:290] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:02:40.128140 13412 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:02:40.130334 13412 raft_consensus.cc:695] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [term 1 LEADER]: Becoming Leader. State: Replica: 5a6868e00b1e4913affa46d0b69009f1, State: Running, Role: LEADER
I20250811 02:02:40.131121 13412 consensus_queue.cc:237] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } }
I20250811 02:02:40.132210 13407 sys_catalog.cc:564] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:02:40.138221 13413 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "5a6868e00b1e4913affa46d0b69009f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } } }
I20250811 02:02:40.138091 13414 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 5a6868e00b1e4913affa46d0b69009f1. Latest consensus state: current_term: 1 leader_uuid: "5a6868e00b1e4913affa46d0b69009f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a6868e00b1e4913affa46d0b69009f1" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 34141 } } }
I20250811 02:02:40.139160 13414 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:40.139154 13413 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1 [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:40.144279 13419 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:02:40.159082 13419 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:02:40.178042 13419 catalog_manager.cc:1349] Generated new cluster ID: 95c337ee43e149a28f1307b61c75dac3
I20250811 02:02:40.178339 13419 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:02:40.198619 13419 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:02:40.200115 13419 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:02:40.218715 13419 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 5a6868e00b1e4913affa46d0b69009f1: Generated new TSK 0
I20250811 02:02:40.219588 13419 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:02:40.232888 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--builtin_ntp_servers=127.12.45.20:32789
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250811 02:02:40.540664 13431 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 02:02:40.541361 13431 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:40.541618 13431 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:40.542091 13431 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:40.576388 13431 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:40.577276 13431 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:02:40.613039 13431 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:32789
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:40.614562 13431 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:40.616302 13431 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:40.630367 13437 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:40.632833 13438 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:41.948932 13439 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250811 02:02:41.954916 13440 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:41.958182 13431 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.328s user 0.504s sys 0.816s
W20250811 02:02:41.958545 13431 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.329s user 0.504s sys 0.816s
I20250811 02:02:41.958825 13431 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:41.960235 13431 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:41.963203 13431 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:41.964699 13431 hybrid_clock.cc:648] HybridClock initialized: now 1754877761964603 us; error 105 us; skew 500 ppm
I20250811 02:02:41.965864 13431 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:41.975173 13431 webserver.cc:489] Webserver started at http://127.12.45.1:38529/ using document root <none> and password file <none>
I20250811 02:02:41.976493 13431 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:41.976784 13431 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:41.977375 13431 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:41.984097 13431 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "65e94c55878d4c82b3f4247dc377c5eb"
format_stamp: "Formatted at 2025-08-11 02:02:41 on dist-test-slave-xn5f"
I20250811 02:02:41.985670 13431 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "65e94c55878d4c82b3f4247dc377c5eb"
format_stamp: "Formatted at 2025-08-11 02:02:41 on dist-test-slave-xn5f"
I20250811 02:02:41.996109 13431 fs_manager.cc:696] Time spent creating directory manager: real 0.010s user 0.010s sys 0.002s
I20250811 02:02:42.004146 13447 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:42.005532 13431 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.001s sys 0.003s
I20250811 02:02:42.005936 13431 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "65e94c55878d4c82b3f4247dc377c5eb"
format_stamp: "Formatted at 2025-08-11 02:02:41 on dist-test-slave-xn5f"
I20250811 02:02:42.006408 13431 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:42.079458 13431 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:42.080943 13431 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:42.081367 13431 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:42.084074 13431 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:42.088497 13431 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:42.088721 13431 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:42.088974 13431 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:42.089119 13431 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:42.259727 13431 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:35143
I20250811 02:02:42.259863 13559 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:35143 every 8 connection(s)
I20250811 02:02:42.262609 13431 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:02:42.267735 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13431
I20250811 02:02:42.268115 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:02:42.276182 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--builtin_ntp_servers=127.12.45.20:32789
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250811 02:02:42.288816 13560 heartbeater.cc:344] Connected to a master server at 127.12.45.62:34141
I20250811 02:02:42.289347 13560 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:42.290681 13560 heartbeater.cc:507] Master 127.12.45.62:34141 requested a full tablet report, sending...
I20250811 02:02:42.293956 13371 ts_manager.cc:194] Registered new tserver with Master: 65e94c55878d4c82b3f4247dc377c5eb (127.12.45.1:35143)
I20250811 02:02:42.297178 13371 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:50639
W20250811 02:02:42.587586 13564 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 02:02:42.588248 13564 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:42.588519 13564 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:42.588995 13564 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:42.620330 13564 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:42.621210 13564 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:02:42.655663 13564 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:32789
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:42.657044 13564 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:42.658715 13564 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:42.671742 13570 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:43.301836 13560 heartbeater.cc:499] Master 127.12.45.62:34141 was elected leader, sending a full tablet report...
W20250811 02:02:44.074967 13569 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 13564
W20250811 02:02:44.409772 13569 kernel_stack_watchdog.cc:198] Thread 13564 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:02:42.672245 13571 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:44.410534 13564 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.739s user 0.576s sys 1.162s
W20250811 02:02:44.411093 13564 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.740s user 0.576s sys 1.163s
W20250811 02:02:44.411496 13572 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1738 milliseconds
W20250811 02:02:44.412731 13573 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:44.413185 13564 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:44.415939 13564 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:44.418138 13564 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:44.419485 13564 hybrid_clock.cc:648] HybridClock initialized: now 1754877764419444 us; error 54 us; skew 500 ppm
I20250811 02:02:44.420245 13564 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:44.426385 13564 webserver.cc:489] Webserver started at http://127.12.45.2:39141/ using document root <none> and password file <none>
I20250811 02:02:44.427412 13564 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:44.427668 13564 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:44.428103 13564 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:44.432552 13564 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "f49a6890b1fd4a2b8004633ebaad6367"
format_stamp: "Formatted at 2025-08-11 02:02:44 on dist-test-slave-xn5f"
I20250811 02:02:44.433799 13564 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "f49a6890b1fd4a2b8004633ebaad6367"
format_stamp: "Formatted at 2025-08-11 02:02:44 on dist-test-slave-xn5f"
I20250811 02:02:44.441217 13564 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.005s
I20250811 02:02:44.446761 13580 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:44.447896 13564 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250811 02:02:44.448243 13564 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "f49a6890b1fd4a2b8004633ebaad6367"
format_stamp: "Formatted at 2025-08-11 02:02:44 on dist-test-slave-xn5f"
I20250811 02:02:44.448547 13564 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:44.517627 13564 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:44.519196 13564 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:44.519652 13564 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:44.522269 13564 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:44.526615 13564 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:44.526819 13564 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:44.527050 13564 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:44.527189 13564 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:44.661794 13564 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:33753
I20250811 02:02:44.661896 13692 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:33753 every 8 connection(s)
I20250811 02:02:44.664577 13564 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:02:44.673849 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13564
I20250811 02:02:44.674228 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:02:44.680325 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--builtin_ntp_servers=127.12.45.20:32789
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250811 02:02:44.686759 13693 heartbeater.cc:344] Connected to a master server at 127.12.45.62:34141
I20250811 02:02:44.687259 13693 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:44.688531 13693 heartbeater.cc:507] Master 127.12.45.62:34141 requested a full tablet report, sending...
I20250811 02:02:44.691160 13371 ts_manager.cc:194] Registered new tserver with Master: f49a6890b1fd4a2b8004633ebaad6367 (127.12.45.2:33753)
I20250811 02:02:44.692427 13371 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:53581
W20250811 02:02:44.985842 13697 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 02:02:44.986481 13697 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:44.986732 13697 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:44.987237 13697 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:45.018231 13697 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:45.019137 13697 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:02:45.054122 13697 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:32789
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:34141
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:45.055603 13697 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:45.057220 13697 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:45.069464 13703 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:45.696250 13693 heartbeater.cc:499] Master 127.12.45.62:34141 was elected leader, sending a full tablet report...
W20250811 02:02:45.070569 13704 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:46.445078 13705 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1369 milliseconds
W20250811 02:02:46.447152 13706 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:46.449545 13697 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.379s user 0.001s sys 0.007s
W20250811 02:02:46.449832 13697 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.379s user 0.001s sys 0.007s
I20250811 02:02:46.450057 13697 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:46.451311 13697 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:46.453994 13697 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:46.455475 13697 hybrid_clock.cc:648] HybridClock initialized: now 1754877766455435 us; error 51 us; skew 500 ppm
I20250811 02:02:46.456307 13697 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:46.464741 13697 webserver.cc:489] Webserver started at http://127.12.45.3:42611/ using document root <none> and password file <none>
I20250811 02:02:46.465785 13697 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:46.465993 13697 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:46.466481 13697 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:46.470963 13697 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "5eacc56fba70430a83f2215fc82de53a"
format_stamp: "Formatted at 2025-08-11 02:02:46 on dist-test-slave-xn5f"
I20250811 02:02:46.472113 13697 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "5eacc56fba70430a83f2215fc82de53a"
format_stamp: "Formatted at 2025-08-11 02:02:46 on dist-test-slave-xn5f"
I20250811 02:02:46.480388 13697 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.004s sys 0.004s
I20250811 02:02:46.486799 13713 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:46.487973 13697 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 02:02:46.488283 13697 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "5eacc56fba70430a83f2215fc82de53a"
format_stamp: "Formatted at 2025-08-11 02:02:46 on dist-test-slave-xn5f"
I20250811 02:02:46.488617 13697 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:46.556977 13697 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:46.558501 13697 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:46.558974 13697 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:46.561564 13697 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:46.565744 13697 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:46.565963 13697 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:46.566193 13697 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:46.566377 13697 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:46.707530 13697 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:38345
I20250811 02:02:46.707625 13826 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:38345 every 8 connection(s)
I20250811 02:02:46.710098 13697 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:02:46.715341 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13697
I20250811 02:02:46.716010 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:02:46.741225 13827 heartbeater.cc:344] Connected to a master server at 127.12.45.62:34141
I20250811 02:02:46.741732 13827 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:46.742808 13827 heartbeater.cc:507] Master 127.12.45.62:34141 requested a full tablet report, sending...
I20250811 02:02:46.745026 13371 ts_manager.cc:194] Registered new tserver with Master: 5eacc56fba70430a83f2215fc82de53a (127.12.45.3:38345)
I20250811 02:02:46.746292 13371 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:60583
I20250811 02:02:46.754558 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:02:46.791065 13371 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:40112:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250811 02:02:46.810500 13371 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:02:46.874305 13628 tablet_service.cc:1468] Processing CreateTablet for tablet 0aa71a39b5b8456f8dfcd85109c726e4 (DEFAULT_TABLE table=TestTable [id=ba2335074fb64f67b45df0a737f5aca1]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:46.875233 13762 tablet_service.cc:1468] Processing CreateTablet for tablet 0aa71a39b5b8456f8dfcd85109c726e4 (DEFAULT_TABLE table=TestTable [id=ba2335074fb64f67b45df0a737f5aca1]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:46.875703 13495 tablet_service.cc:1468] Processing CreateTablet for tablet 0aa71a39b5b8456f8dfcd85109c726e4 (DEFAULT_TABLE table=TestTable [id=ba2335074fb64f67b45df0a737f5aca1]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:02:46.876376 13628 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0aa71a39b5b8456f8dfcd85109c726e4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:46.876785 13762 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0aa71a39b5b8456f8dfcd85109c726e4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:46.877489 13495 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0aa71a39b5b8456f8dfcd85109c726e4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:46.902817 13846 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Bootstrap starting.
I20250811 02:02:46.908427 13848 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Bootstrap starting.
I20250811 02:02:46.909103 13846 tablet_bootstrap.cc:654] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:46.909989 13847 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Bootstrap starting.
I20250811 02:02:46.913604 13846 log.cc:826] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:46.916601 13848 tablet_bootstrap.cc:654] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:46.916833 13847 tablet_bootstrap.cc:654] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:46.918744 13847 log.cc:826] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:46.918861 13848 log.cc:826] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:46.919745 13846 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: No bootstrap required, opened a new log
I20250811 02:02:46.920295 13846 ts_tablet_manager.cc:1397] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Time spent bootstrapping tablet: real 0.018s user 0.007s sys 0.008s
I20250811 02:02:46.924263 13848 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: No bootstrap required, opened a new log
I20250811 02:02:46.924535 13847 tablet_bootstrap.cc:492] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: No bootstrap required, opened a new log
I20250811 02:02:46.924806 13848 ts_tablet_manager.cc:1397] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Time spent bootstrapping tablet: real 0.017s user 0.011s sys 0.005s
I20250811 02:02:46.925048 13847 ts_tablet_manager.cc:1397] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Time spent bootstrapping tablet: real 0.016s user 0.005s sys 0.008s
I20250811 02:02:46.943950 13848 raft_consensus.cc:357] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.945029 13848 raft_consensus.cc:738] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 65e94c55878d4c82b3f4247dc377c5eb, State: Initialized, Role: FOLLOWER
I20250811 02:02:46.945830 13848 consensus_queue.cc:260] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.947801 13846 raft_consensus.cc:357] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.948807 13846 raft_consensus.cc:738] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5eacc56fba70430a83f2215fc82de53a, State: Initialized, Role: FOLLOWER
I20250811 02:02:46.949546 13846 consensus_queue.cc:260] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.950227 13848 ts_tablet_manager.cc:1428] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Time spent starting tablet: real 0.025s user 0.021s sys 0.003s
I20250811 02:02:46.953195 13827 heartbeater.cc:499] Master 127.12.45.62:34141 was elected leader, sending a full tablet report...
I20250811 02:02:46.954550 13846 ts_tablet_manager.cc:1428] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Time spent starting tablet: real 0.034s user 0.028s sys 0.003s
I20250811 02:02:46.954496 13847 raft_consensus.cc:357] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.956099 13847 raft_consensus.cc:738] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f49a6890b1fd4a2b8004633ebaad6367, State: Initialized, Role: FOLLOWER
I20250811 02:02:46.956681 13847 consensus_queue.cc:260] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:46.960358 13847 ts_tablet_manager.cc:1428] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Time spent starting tablet: real 0.035s user 0.031s sys 0.000s
W20250811 02:02:46.964238 13828 tablet.cc:2378] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:46.978147 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:02:46.981647 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 65e94c55878d4c82b3f4247dc377c5eb to finish bootstrapping
I20250811 02:02:46.995833 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f49a6890b1fd4a2b8004633ebaad6367 to finish bootstrapping
I20250811 02:02:47.007146 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5eacc56fba70430a83f2215fc82de53a to finish bootstrapping
W20250811 02:02:47.023564 13561 tablet.cc:2378] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:47.052310 13515 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4"
dest_uuid: "65e94c55878d4c82b3f4247dc377c5eb"
from {username='slave'} at 127.0.0.1:46488
I20250811 02:02:47.053053 13515 raft_consensus.cc:491] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250811 02:02:47.053462 13515 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:47.060314 13515 raft_consensus.cc:513] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:47.063413 13515 leader_election.cc:290] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [CANDIDATE]: Term 1 election: Requested vote from peers f49a6890b1fd4a2b8004633ebaad6367 (127.12.45.2:33753), 5eacc56fba70430a83f2215fc82de53a (127.12.45.3:38345)
I20250811 02:02:47.075476 12468 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250811 02:02:47.079006 13648 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4" candidate_uuid: "65e94c55878d4c82b3f4247dc377c5eb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "f49a6890b1fd4a2b8004633ebaad6367"
I20250811 02:02:47.079133 13782 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4" candidate_uuid: "65e94c55878d4c82b3f4247dc377c5eb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "5eacc56fba70430a83f2215fc82de53a"
I20250811 02:02:47.079859 13782 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:47.079859 13648 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:47.086892 13648 raft_consensus.cc:2466] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 65e94c55878d4c82b3f4247dc377c5eb in term 1.
I20250811 02:02:47.086890 13782 raft_consensus.cc:2466] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 65e94c55878d4c82b3f4247dc377c5eb in term 1.
I20250811 02:02:47.088265 13449 leader_election.cc:304] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5eacc56fba70430a83f2215fc82de53a, 65e94c55878d4c82b3f4247dc377c5eb; no voters:
I20250811 02:02:47.089329 13852 raft_consensus.cc:2802] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:02:47.091718 13852 raft_consensus.cc:695] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 LEADER]: Becoming Leader. State: Replica: 65e94c55878d4c82b3f4247dc377c5eb, State: Running, Role: LEADER
I20250811 02:02:47.092693 13852 consensus_queue.cc:237] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:47.104336 13368 catalog_manager.cc:5582] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb reported cstate change: term changed from 0 to 1, leader changed from <none> to 65e94c55878d4c82b3f4247dc377c5eb (127.12.45.1). New cstate: current_term: 1 leader_uuid: "65e94c55878d4c82b3f4247dc377c5eb" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } health_report { overall_health: UNKNOWN } } }
W20250811 02:02:47.172097 13694 tablet.cc:2378] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:02:47.181247 12468 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250811 02:02:47.386843 12468 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250811 02:02:47.485508 13852 consensus_queue.cc:1035] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [LEADER]: Connected to new peer: Peer: permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:02:47.504601 13864 consensus_queue.cc:1035] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [LEADER]: Connected to new peer: Peer: permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:02:49.383426 13515 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4"
dest_uuid: "65e94c55878d4c82b3f4247dc377c5eb"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:46490
I20250811 02:02:49.384115 13515 raft_consensus.cc:604] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 LEADER]: Received request to transfer leadership
I20250811 02:02:49.414085 13891 raft_consensus.cc:991] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb: Instructing follower f49a6890b1fd4a2b8004633ebaad6367 to start an election
I20250811 02:02:49.414460 13880 raft_consensus.cc:1079] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 LEADER]: Signalling peer f49a6890b1fd4a2b8004633ebaad6367 to start an election
I20250811 02:02:49.415977 13648 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4"
dest_uuid: "f49a6890b1fd4a2b8004633ebaad6367"
from {username='slave'} at 127.12.45.1:38357
I20250811 02:02:49.416513 13648 raft_consensus.cc:491] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250811 02:02:49.416786 13648 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:02:49.420931 13648 raft_consensus.cc:513] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:49.423152 13648 leader_election.cc:290] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [CANDIDATE]: Term 2 election: Requested vote from peers 65e94c55878d4c82b3f4247dc377c5eb (127.12.45.1:35143), 5eacc56fba70430a83f2215fc82de53a (127.12.45.3:38345)
I20250811 02:02:49.437942 13515 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4" candidate_uuid: "f49a6890b1fd4a2b8004633ebaad6367" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "65e94c55878d4c82b3f4247dc377c5eb"
I20250811 02:02:49.438644 13515 raft_consensus.cc:3053] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 LEADER]: Stepping down as leader of term 1
I20250811 02:02:49.438968 13515 raft_consensus.cc:738] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 65e94c55878d4c82b3f4247dc377c5eb, State: Running, Role: LEADER
I20250811 02:02:49.439631 13515 consensus_queue.cc:260] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:49.440541 13782 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4" candidate_uuid: "f49a6890b1fd4a2b8004633ebaad6367" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "5eacc56fba70430a83f2215fc82de53a"
I20250811 02:02:49.440757 13515 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:02:49.440970 13782 raft_consensus.cc:3058] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:02:49.444965 13782 raft_consensus.cc:2466] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f49a6890b1fd4a2b8004633ebaad6367 in term 2.
I20250811 02:02:49.444976 13515 raft_consensus.cc:2466] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f49a6890b1fd4a2b8004633ebaad6367 in term 2.
I20250811 02:02:49.445992 13581 leader_election.cc:304] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 65e94c55878d4c82b3f4247dc377c5eb, f49a6890b1fd4a2b8004633ebaad6367; no voters:
I20250811 02:02:49.447813 13895 raft_consensus.cc:2802] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:02:49.449080 13895 raft_consensus.cc:695] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [term 2 LEADER]: Becoming Leader. State: Replica: f49a6890b1fd4a2b8004633ebaad6367, State: Running, Role: LEADER
I20250811 02:02:49.449945 13895 consensus_queue.cc:237] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } }
I20250811 02:02:49.457283 13369 catalog_manager.cc:5582] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 reported cstate change: term changed from 1 to 2, leader changed from 65e94c55878d4c82b3f4247dc377c5eb (127.12.45.1) to f49a6890b1fd4a2b8004633ebaad6367 (127.12.45.2). New cstate: current_term: 2 leader_uuid: "f49a6890b1fd4a2b8004633ebaad6367" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f49a6890b1fd4a2b8004633ebaad6367" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 33753 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 } health_report { overall_health: UNKNOWN } } }
I20250811 02:02:49.881289 13515 raft_consensus.cc:1273] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 2 FOLLOWER]: Refusing update from remote peer f49a6890b1fd4a2b8004633ebaad6367: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 02:02:49.882656 13895 consensus_queue.cc:1035] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [LEADER]: Connected to new peer: Peer: permanent_uuid: "65e94c55878d4c82b3f4247dc377c5eb" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 35143 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250811 02:02:49.892788 13782 raft_consensus.cc:1273] T 0aa71a39b5b8456f8dfcd85109c726e4 P 5eacc56fba70430a83f2215fc82de53a [term 2 FOLLOWER]: Refusing update from remote peer f49a6890b1fd4a2b8004633ebaad6367: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 02:02:49.894898 13904 consensus_queue.cc:1035] T 0aa71a39b5b8456f8dfcd85109c726e4 P f49a6890b1fd4a2b8004633ebaad6367 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5eacc56fba70430a83f2215fc82de53a" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 38345 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250811 02:02:52.159407 13515 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "0aa71a39b5b8456f8dfcd85109c726e4"
dest_uuid: "65e94c55878d4c82b3f4247dc377c5eb"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:46494
I20250811 02:02:52.160015 13515 raft_consensus.cc:604] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 2 FOLLOWER]: Received request to transfer leadership
I20250811 02:02:52.160353 13515 raft_consensus.cc:612] T 0aa71a39b5b8456f8dfcd85109c726e4 P 65e94c55878d4c82b3f4247dc377c5eb [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250811 02:02:53.195696 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13431
I20250811 02:02:53.221210 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13564
I20250811 02:02:53.245222 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13697
I20250811 02:02:53.271238 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13338
2025-08-11T02:02:53Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (15199 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250811 02:02:53.323069 12468 test_util.cc:276] Using random seed: 1380817699
I20250811 02:02:53.327077 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:02:53.327229 12468 ts_itest-base.cc:116] --------------
I20250811 02:02:53.327396 12468 ts_itest-base.cc:117] 3 tablet servers
I20250811 02:02:53.327538 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:02:53.327680 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:02:53Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:02:53Z Disabled control of system clock
I20250811 02:02:53.363986 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:42251
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:34103
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:42251 with env {}
W20250811 02:02:53.671020 13937 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:53.671692 13937 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:53.672154 13937 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:53.704504 13937 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:02:53.704836 13937 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:53.705127 13937 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:02:53.705365 13937 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:02:53.741199 13937 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:34103
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:42251
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:42251
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:53.742772 13937 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:53.744567 13937 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:53.756850 13943 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:53.757259 13944 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:53.761015 13946 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:54.916946 13945 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 02:02:54.917061 13937 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:02:54.920512 13937 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:54.923175 13937 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:54.924506 13937 hybrid_clock.cc:648] HybridClock initialized: now 1754877774924463 us; error 52 us; skew 500 ppm
I20250811 02:02:54.925299 13937 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:54.931488 13937 webserver.cc:489] Webserver started at http://127.12.45.62:34099/ using document root <none> and password file <none>
I20250811 02:02:54.932487 13937 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:54.932713 13937 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:54.933194 13937 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:54.937649 13937 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "6ff7e3d870f64e808ca2edddde42df7e"
format_stamp: "Formatted at 2025-08-11 02:02:54 on dist-test-slave-xn5f"
I20250811 02:02:54.938802 13937 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "6ff7e3d870f64e808ca2edddde42df7e"
format_stamp: "Formatted at 2025-08-11 02:02:54 on dist-test-slave-xn5f"
I20250811 02:02:54.946141 13937 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.000s
I20250811 02:02:54.951818 13953 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:54.952917 13937 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:02:54.953223 13937 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "6ff7e3d870f64e808ca2edddde42df7e"
format_stamp: "Formatted at 2025-08-11 02:02:54 on dist-test-slave-xn5f"
I20250811 02:02:54.953557 13937 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:55.010977 13937 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:55.012610 13937 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:55.013038 13937 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:55.086697 13937 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:42251
I20250811 02:02:55.086768 14004 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:42251 every 8 connection(s)
I20250811 02:02:55.089569 13937 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:02:55.092332 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 13937
I20250811 02:02:55.092918 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:02:55.095559 14005 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:02:55.121979 14005 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e: Bootstrap starting.
I20250811 02:02:55.127779 14005 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e: Neither blocks nor log segments found. Creating new log.
I20250811 02:02:55.129494 14005 log.cc:826] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e: Log is configured to *not* fsync() on all Append() calls
I20250811 02:02:55.134351 14005 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e: No bootstrap required, opened a new log
I20250811 02:02:55.151806 14005 raft_consensus.cc:357] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } }
I20250811 02:02:55.152699 14005 raft_consensus.cc:383] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:02:55.153023 14005 raft_consensus.cc:738] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6ff7e3d870f64e808ca2edddde42df7e, State: Initialized, Role: FOLLOWER
I20250811 02:02:55.153908 14005 consensus_queue.cc:260] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } }
I20250811 02:02:55.154382 14005 raft_consensus.cc:397] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:02:55.154655 14005 raft_consensus.cc:491] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:02:55.154974 14005 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:02:55.159307 14005 raft_consensus.cc:513] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } }
I20250811 02:02:55.160071 14005 leader_election.cc:304] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6ff7e3d870f64e808ca2edddde42df7e; no voters:
I20250811 02:02:55.162612 14005 leader_election.cc:290] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:02:55.162953 14010 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:02:55.165455 14010 raft_consensus.cc:695] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [term 1 LEADER]: Becoming Leader. State: Replica: 6ff7e3d870f64e808ca2edddde42df7e, State: Running, Role: LEADER
I20250811 02:02:55.166225 14010 consensus_queue.cc:237] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } }
I20250811 02:02:55.167280 14005 sys_catalog.cc:564] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:02:55.177428 14012 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [sys.catalog]: SysCatalogTable state changed. Reason: New leader 6ff7e3d870f64e808ca2edddde42df7e. Latest consensus state: current_term: 1 leader_uuid: "6ff7e3d870f64e808ca2edddde42df7e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } } }
I20250811 02:02:55.177821 14011 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "6ff7e3d870f64e808ca2edddde42df7e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6ff7e3d870f64e808ca2edddde42df7e" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 42251 } } }
I20250811 02:02:55.178458 14012 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:55.178524 14011 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e [sys.catalog]: This master's current role is: LEADER
I20250811 02:02:55.181936 14019 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:02:55.193939 14019 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:02:55.213744 14019 catalog_manager.cc:1349] Generated new cluster ID: fba90bb9f495485ca3bee8d587e3ddd3
I20250811 02:02:55.214063 14019 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:02:55.232622 14019 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:02:55.234122 14019 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:02:55.251948 14019 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 6ff7e3d870f64e808ca2edddde42df7e: Generated new TSK 0
I20250811 02:02:55.253155 14019 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:02:55.267472 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--builtin_ntp_servers=127.12.45.20:34103
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 02:02:55.579857 14029 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:55.580394 14029 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:55.580895 14029 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:55.612699 14029 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:55.613546 14029 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:02:55.651530 14029 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:34103
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:55.653017 14029 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:55.654767 14029 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:55.668042 14035 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:57.073441 14034 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14029
W20250811 02:02:57.589028 14029 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.919s user 0.634s sys 1.213s
W20250811 02:02:57.590121 14029 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.921s user 0.634s sys 1.213s
W20250811 02:02:55.669373 14036 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:02:57.590636 14037 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1920 milliseconds
I20250811 02:02:57.592161 14029 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 02:02:57.592240 14038 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:57.596069 14029 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:02:57.598707 14029 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:02:57.600222 14029 hybrid_clock.cc:648] HybridClock initialized: now 1754877777600184 us; error 29 us; skew 500 ppm
I20250811 02:02:57.601280 14029 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:02:57.609179 14029 webserver.cc:489] Webserver started at http://127.12.45.1:36657/ using document root <none> and password file <none>
I20250811 02:02:57.610453 14029 fs_manager.cc:362] Metadata directory not provided
I20250811 02:02:57.610723 14029 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:02:57.611323 14029 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:02:57.617837 14029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f5439c7301514eb6b5fcaddbad2a6d53"
format_stamp: "Formatted at 2025-08-11 02:02:57 on dist-test-slave-xn5f"
I20250811 02:02:57.619369 14029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f5439c7301514eb6b5fcaddbad2a6d53"
format_stamp: "Formatted at 2025-08-11 02:02:57 on dist-test-slave-xn5f"
I20250811 02:02:57.627998 14029 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.003s sys 0.004s
I20250811 02:02:57.634048 14046 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:57.635247 14029 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250811 02:02:57.635576 14029 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f5439c7301514eb6b5fcaddbad2a6d53"
format_stamp: "Formatted at 2025-08-11 02:02:57 on dist-test-slave-xn5f"
I20250811 02:02:57.635913 14029 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:02:57.687703 14029 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:02:57.689451 14029 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:02:57.689893 14029 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:02:57.692766 14029 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:02:57.697623 14029 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:02:57.697830 14029 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:57.698069 14029 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:02:57.698225 14029 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:02:57.867108 14029 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:44491
I20250811 02:02:57.867214 14158 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:44491 every 8 connection(s)
I20250811 02:02:57.869889 14029 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:02:57.872963 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14029
I20250811 02:02:57.873610 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:02:57.884816 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--builtin_ntp_servers=127.12.45.20:34103
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:02:57.909771 14159 heartbeater.cc:344] Connected to a master server at 127.12.45.62:42251
I20250811 02:02:57.910346 14159 heartbeater.cc:461] Registering TS with master...
I20250811 02:02:57.911706 14159 heartbeater.cc:507] Master 127.12.45.62:42251 requested a full tablet report, sending...
I20250811 02:02:57.915068 13970 ts_manager.cc:194] Registered new tserver with Master: f5439c7301514eb6b5fcaddbad2a6d53 (127.12.45.1:44491)
I20250811 02:02:57.917948 13970 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:33069
W20250811 02:02:58.194767 14163 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:02:58.195498 14163 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:02:58.196136 14163 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:02:58.227272 14163 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:02:58.228135 14163 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:02:58.261974 14163 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:34103
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:02:58.263485 14163 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:02:58.265161 14163 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:02:58.277343 14169 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:02:58.921852 14159 heartbeater.cc:499] Master 127.12.45.62:42251 was elected leader, sending a full tablet report...
W20250811 02:02:59.681360 14168 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14163
W20250811 02:03:00.002185 14163 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.724s user 0.663s sys 1.060s
W20250811 02:03:00.002553 14163 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.725s user 0.663s sys 1.060s
W20250811 02:02:58.278434 14170 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:00.004868 14172 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:00.007733 14171 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1724 milliseconds
I20250811 02:03:00.007800 14163 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:00.008999 14163 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:00.011101 14163 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:00.012449 14163 hybrid_clock.cc:648] HybridClock initialized: now 1754877780012402 us; error 46 us; skew 500 ppm
I20250811 02:03:00.013305 14163 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:00.019287 14163 webserver.cc:489] Webserver started at http://127.12.45.2:34605/ using document root <none> and password file <none>
I20250811 02:03:00.020219 14163 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:00.020438 14163 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:00.020889 14163 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:00.025334 14163 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee"
format_stamp: "Formatted at 2025-08-11 02:03:00 on dist-test-slave-xn5f"
I20250811 02:03:00.026604 14163 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee"
format_stamp: "Formatted at 2025-08-11 02:03:00 on dist-test-slave-xn5f"
I20250811 02:03:00.033578 14163 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.000s
I20250811 02:03:00.039078 14179 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:00.040048 14163 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250811 02:03:00.040359 14163 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee"
format_stamp: "Formatted at 2025-08-11 02:03:00 on dist-test-slave-xn5f"
I20250811 02:03:00.040680 14163 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:00.083861 14163 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:00.085433 14163 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:00.085842 14163 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:00.089010 14163 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:00.093179 14163 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:00.093400 14163 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:00.093647 14163 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:00.093806 14163 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:00.226728 14163 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:42729
I20250811 02:03:00.226830 14291 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:42729 every 8 connection(s)
I20250811 02:03:00.229393 14163 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:03:00.238247 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14163
I20250811 02:03:00.238811 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:03:00.245913 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--builtin_ntp_servers=127.12.45.20:34103
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:00.255190 14292 heartbeater.cc:344] Connected to a master server at 127.12.45.62:42251
I20250811 02:03:00.255731 14292 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:00.257058 14292 heartbeater.cc:507] Master 127.12.45.62:42251 requested a full tablet report, sending...
I20250811 02:03:00.259511 13970 ts_manager.cc:194] Registered new tserver with Master: 0da1e73fb15f466fb7b6e1d6ec73d0ee (127.12.45.2:42729)
I20250811 02:03:00.261279 13970 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:34921
W20250811 02:03:00.542846 14296 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:00.543395 14296 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:00.543835 14296 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:00.574452 14296 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:00.575255 14296 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:03:00.610179 14296 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:34103
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:42251
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:00.611521 14296 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:00.613138 14296 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:00.625006 14302 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:01.264674 14292 heartbeater.cc:499] Master 127.12.45.62:42251 was elected leader, sending a full tablet report...
W20250811 02:03:02.029529 14301 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14296
W20250811 02:03:02.248015 14296 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.623s user 0.587s sys 1.035s
W20250811 02:03:02.248996 14296 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.624s user 0.587s sys 1.035s
W20250811 02:03:00.625602 14303 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:02.250401 14304 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1623 milliseconds
W20250811 02:03:02.250967 14305 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:02.251165 14296 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:02.255805 14296 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:02.257982 14296 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:02.259356 14296 hybrid_clock.cc:648] HybridClock initialized: now 1754877782259310 us; error 53 us; skew 500 ppm
I20250811 02:03:02.260187 14296 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:02.266634 14296 webserver.cc:489] Webserver started at http://127.12.45.3:38913/ using document root <none> and password file <none>
I20250811 02:03:02.267697 14296 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:02.267938 14296 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:02.268414 14296 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:02.273795 14296 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "ca0adca9c21b470cbf8ca5cc832f299d"
format_stamp: "Formatted at 2025-08-11 02:03:02 on dist-test-slave-xn5f"
I20250811 02:03:02.274914 14296 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "ca0adca9c21b470cbf8ca5cc832f299d"
format_stamp: "Formatted at 2025-08-11 02:03:02 on dist-test-slave-xn5f"
I20250811 02:03:02.281949 14296 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.000s
I20250811 02:03:02.287521 14312 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:02.288581 14296 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 02:03:02.288887 14296 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "ca0adca9c21b470cbf8ca5cc832f299d"
format_stamp: "Formatted at 2025-08-11 02:03:02 on dist-test-slave-xn5f"
I20250811 02:03:02.289199 14296 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:02.343170 14296 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:02.344688 14296 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:02.345135 14296 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:02.347729 14296 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:02.351953 14296 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:02.352149 14296 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:02.352432 14296 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:02.352579 14296 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:02.485669 14296 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:42031
I20250811 02:03:02.485786 14424 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:42031 every 8 connection(s)
I20250811 02:03:02.488534 14296 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:03:02.494853 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14296
I20250811 02:03:02.495414 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:03:02.509963 14425 heartbeater.cc:344] Connected to a master server at 127.12.45.62:42251
I20250811 02:03:02.510437 14425 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:02.511528 14425 heartbeater.cc:507] Master 127.12.45.62:42251 requested a full tablet report, sending...
I20250811 02:03:02.513816 13970 ts_manager.cc:194] Registered new tserver with Master: ca0adca9c21b470cbf8ca5cc832f299d (127.12.45.3:42031)
I20250811 02:03:02.515471 13970 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:56841
I20250811 02:03:02.516600 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:02.549862 13970 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59316:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250811 02:03:02.568776 13970 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:03:02.620427 14227 tablet_service.cc:1468] Processing CreateTablet for tablet 3eded7bd61a54b41925faa9914be009d (DEFAULT_TABLE table=TestTable [id=4a3d4331342d455589ae35551327ee6f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:02.620369 14360 tablet_service.cc:1468] Processing CreateTablet for tablet 3eded7bd61a54b41925faa9914be009d (DEFAULT_TABLE table=TestTable [id=4a3d4331342d455589ae35551327ee6f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:02.622493 14360 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3eded7bd61a54b41925faa9914be009d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:02.622502 14227 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3eded7bd61a54b41925faa9914be009d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:02.629734 14094 tablet_service.cc:1468] Processing CreateTablet for tablet 3eded7bd61a54b41925faa9914be009d (DEFAULT_TABLE table=TestTable [id=4a3d4331342d455589ae35551327ee6f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:02.631424 14094 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3eded7bd61a54b41925faa9914be009d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:02.649883 14444 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Bootstrap starting.
I20250811 02:03:02.652010 14445 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Bootstrap starting.
I20250811 02:03:02.654168 14446 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Bootstrap starting.
I20250811 02:03:02.657573 14444 tablet_bootstrap.cc:654] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.659696 14445 tablet_bootstrap.cc:654] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.659965 14444 log.cc:826] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:02.661935 14446 tablet_bootstrap.cc:654] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.662093 14445 log.cc:826] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:02.664690 14446 log.cc:826] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:02.665614 14444 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: No bootstrap required, opened a new log
I20250811 02:03:02.666134 14444 ts_tablet_manager.cc:1397] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Time spent bootstrapping tablet: real 0.017s user 0.000s sys 0.012s
I20250811 02:03:02.667827 14445 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: No bootstrap required, opened a new log
I20250811 02:03:02.668393 14445 ts_tablet_manager.cc:1397] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Time spent bootstrapping tablet: real 0.017s user 0.011s sys 0.004s
I20250811 02:03:02.670209 14446 tablet_bootstrap.cc:492] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: No bootstrap required, opened a new log
I20250811 02:03:02.670922 14446 ts_tablet_manager.cc:1397] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Time spent bootstrapping tablet: real 0.017s user 0.016s sys 0.000s
I20250811 02:03:02.685827 14444 raft_consensus.cc:357] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.686550 14444 raft_consensus.cc:383] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.686839 14444 raft_consensus.cc:738] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0da1e73fb15f466fb7b6e1d6ec73d0ee, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.687669 14444 consensus_queue.cc:260] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.691953 14444 ts_tablet_manager.cc:1428] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Time spent starting tablet: real 0.025s user 0.024s sys 0.000s
I20250811 02:03:02.696933 14445 raft_consensus.cc:357] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.697077 14446 raft_consensus.cc:357] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.697824 14445 raft_consensus.cc:383] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.697870 14446 raft_consensus.cc:383] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.698134 14446 raft_consensus.cc:738] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f5439c7301514eb6b5fcaddbad2a6d53, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.698207 14445 raft_consensus.cc:738] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ca0adca9c21b470cbf8ca5cc832f299d, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.699153 14445 consensus_queue.cc:260] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.699097 14446 consensus_queue.cc:260] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.703342 14425 heartbeater.cc:499] Master 127.12.45.62:42251 was elected leader, sending a full tablet report...
I20250811 02:03:02.705204 14446 ts_tablet_manager.cc:1428] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Time spent starting tablet: real 0.034s user 0.021s sys 0.011s
I20250811 02:03:02.706128 14445 ts_tablet_manager.cc:1428] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Time spent starting tablet: real 0.037s user 0.033s sys 0.000s
W20250811 02:03:02.736583 14293 tablet.cc:2378] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:03:02.739658 14451 raft_consensus.cc:491] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:03:02.740211 14451 raft_consensus.cc:513] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
W20250811 02:03:02.743080 14426 tablet.cc:2378] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:03:02.743132 14451 leader_election.cc:290] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers ca0adca9c21b470cbf8ca5cc832f299d (127.12.45.3:42031), 0da1e73fb15f466fb7b6e1d6ec73d0ee (127.12.45.2:42729)
I20250811 02:03:02.754279 14380 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3eded7bd61a54b41925faa9914be009d" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" is_pre_election: true
I20250811 02:03:02.754626 14247 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3eded7bd61a54b41925faa9914be009d" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" is_pre_election: true
I20250811 02:03:02.755326 14380 raft_consensus.cc:2466] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 0.
I20250811 02:03:02.755518 14247 raft_consensus.cc:2466] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 0.
I20250811 02:03:02.756677 14049 leader_election.cc:304] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: ca0adca9c21b470cbf8ca5cc832f299d, f5439c7301514eb6b5fcaddbad2a6d53; no voters:
I20250811 02:03:02.757463 14451 raft_consensus.cc:2802] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:03:02.757731 14451 raft_consensus.cc:491] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:03:02.757937 14451 raft_consensus.cc:3058] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:02.762118 14451 raft_consensus.cc:513] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.763635 14451 leader_election.cc:290] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 election: Requested vote from peers ca0adca9c21b470cbf8ca5cc832f299d (127.12.45.3:42031), 0da1e73fb15f466fb7b6e1d6ec73d0ee (127.12.45.2:42729)
I20250811 02:03:02.764282 14380 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3eded7bd61a54b41925faa9914be009d" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca0adca9c21b470cbf8ca5cc832f299d"
I20250811 02:03:02.764590 14247 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3eded7bd61a54b41925faa9914be009d" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee"
I20250811 02:03:02.764726 14380 raft_consensus.cc:3058] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:02.765009 14247 raft_consensus.cc:3058] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:02.769187 14380 raft_consensus.cc:2466] T 3eded7bd61a54b41925faa9914be009d P ca0adca9c21b470cbf8ca5cc832f299d [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 1.
I20250811 02:03:02.769541 14247 raft_consensus.cc:2466] T 3eded7bd61a54b41925faa9914be009d P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 1.
I20250811 02:03:02.770015 14049 leader_election.cc:304] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: ca0adca9c21b470cbf8ca5cc832f299d, f5439c7301514eb6b5fcaddbad2a6d53; no voters:
I20250811 02:03:02.770747 14451 raft_consensus.cc:2802] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:02.772691 14451 raft_consensus.cc:695] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 LEADER]: Becoming Leader. State: Replica: f5439c7301514eb6b5fcaddbad2a6d53, State: Running, Role: LEADER
I20250811 02:03:02.773710 14451 consensus_queue.cc:237] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } }
I20250811 02:03:02.784258 13970 catalog_manager.cc:5582] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 reported cstate change: term changed from 0 to 1, leader changed from <none> to f5439c7301514eb6b5fcaddbad2a6d53 (127.12.45.1). New cstate: current_term: 1 leader_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:02.819095 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:02.822400 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f5439c7301514eb6b5fcaddbad2a6d53 to finish bootstrapping
I20250811 02:03:02.835273 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 0da1e73fb15f466fb7b6e1d6ec73d0ee to finish bootstrapping
I20250811 02:03:02.845355 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ca0adca9c21b470cbf8ca5cc832f299d to finish bootstrapping
I20250811 02:03:02.857702 13970 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59316:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
W20250811 02:03:02.859225 13970 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:03:02.877936 14094 tablet_service.cc:1468] Processing CreateTablet for tablet 536b2d551d98433d9b3704628701ddd3 (DEFAULT_TABLE table=TestAnotherTable [id=22ae17a5f1b04e7bb8f03ff763c2e516]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 02:03:02.878367 14227 tablet_service.cc:1468] Processing CreateTablet for tablet 536b2d551d98433d9b3704628701ddd3 (DEFAULT_TABLE table=TestAnotherTable [id=22ae17a5f1b04e7bb8f03ff763c2e516]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 02:03:02.879261 14094 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 536b2d551d98433d9b3704628701ddd3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:02.879510 14227 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 536b2d551d98433d9b3704628701ddd3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:02.879388 14360 tablet_service.cc:1468] Processing CreateTablet for tablet 536b2d551d98433d9b3704628701ddd3 (DEFAULT_TABLE table=TestAnotherTable [id=22ae17a5f1b04e7bb8f03ff763c2e516]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 02:03:02.880463 14360 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 536b2d551d98433d9b3704628701ddd3. 1 dirs total, 0 dirs full, 0 dirs failed
W20250811 02:03:02.883241 14160 tablet.cc:2378] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:03:02.893008 14446 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53: Bootstrap starting.
I20250811 02:03:02.893961 14444 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Bootstrap starting.
I20250811 02:03:02.898790 14445 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d: Bootstrap starting.
I20250811 02:03:02.899195 14446 tablet_bootstrap.cc:654] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.905170 14445 tablet_bootstrap.cc:654] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.906741 14444 tablet_bootstrap.cc:654] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:02.907171 14446 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53: No bootstrap required, opened a new log
I20250811 02:03:02.907609 14446 ts_tablet_manager.cc:1397] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53: Time spent bootstrapping tablet: real 0.015s user 0.010s sys 0.004s
I20250811 02:03:02.912448 14446 raft_consensus.cc:357] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.913211 14446 raft_consensus.cc:383] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.913491 14446 raft_consensus.cc:738] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f5439c7301514eb6b5fcaddbad2a6d53, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.914155 14446 consensus_queue.cc:260] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.915942 14446 ts_tablet_manager.cc:1428] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53: Time spent starting tablet: real 0.008s user 0.005s sys 0.000s
I20250811 02:03:02.919023 14445 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d: No bootstrap required, opened a new log
I20250811 02:03:02.919423 14445 ts_tablet_manager.cc:1397] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d: Time spent bootstrapping tablet: real 0.021s user 0.013s sys 0.004s
I20250811 02:03:02.920539 14444 tablet_bootstrap.cc:492] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee: No bootstrap required, opened a new log
I20250811 02:03:02.920945 14444 ts_tablet_manager.cc:1397] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Time spent bootstrapping tablet: real 0.027s user 0.014s sys 0.002s
I20250811 02:03:02.922010 14445 raft_consensus.cc:357] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.922791 14445 raft_consensus.cc:383] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.923121 14445 raft_consensus.cc:738] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ca0adca9c21b470cbf8ca5cc832f299d, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.923661 14444 raft_consensus.cc:357] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.923800 14445 consensus_queue.cc:260] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.924526 14444 raft_consensus.cc:383] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:02.924826 14444 raft_consensus.cc:738] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0da1e73fb15f466fb7b6e1d6ec73d0ee, State: Initialized, Role: FOLLOWER
I20250811 02:03:02.925618 14444 consensus_queue.cc:260] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:02.929302 14444 ts_tablet_manager.cc:1428] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee: Time spent starting tablet: real 0.008s user 0.002s sys 0.005s
I20250811 02:03:02.930872 14445 ts_tablet_manager.cc:1428] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d: Time spent starting tablet: real 0.011s user 0.002s sys 0.004s
I20250811 02:03:03.167804 14451 raft_consensus.cc:491] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:03:03.168372 14451 raft_consensus.cc:513] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:03.169857 14451 leader_election.cc:290] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 0da1e73fb15f466fb7b6e1d6ec73d0ee (127.12.45.2:42729), ca0adca9c21b470cbf8ca5cc832f299d (127.12.45.3:42031)
I20250811 02:03:03.170781 14247 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "536b2d551d98433d9b3704628701ddd3" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" is_pre_election: true
I20250811 02:03:03.170837 14380 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "536b2d551d98433d9b3704628701ddd3" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" is_pre_election: true
I20250811 02:03:03.171339 14247 raft_consensus.cc:2466] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 0.
I20250811 02:03:03.171372 14380 raft_consensus.cc:2466] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 0.
I20250811 02:03:03.172163 14049 leader_election.cc:304] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: ca0adca9c21b470cbf8ca5cc832f299d, f5439c7301514eb6b5fcaddbad2a6d53; no voters:
I20250811 02:03:03.172757 14451 raft_consensus.cc:2802] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:03:03.173075 14451 raft_consensus.cc:491] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:03:03.173337 14451 raft_consensus.cc:3058] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:03.177161 14451 raft_consensus.cc:513] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:03.178411 14451 leader_election.cc:290] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 election: Requested vote from peers 0da1e73fb15f466fb7b6e1d6ec73d0ee (127.12.45.2:42729), ca0adca9c21b470cbf8ca5cc832f299d (127.12.45.3:42031)
I20250811 02:03:03.179136 14247 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "536b2d551d98433d9b3704628701ddd3" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee"
I20250811 02:03:03.179349 14380 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "536b2d551d98433d9b3704628701ddd3" candidate_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca0adca9c21b470cbf8ca5cc832f299d"
I20250811 02:03:03.179529 14247 raft_consensus.cc:3058] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:03.179788 14380 raft_consensus.cc:3058] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:03.183450 14247 raft_consensus.cc:2466] T 536b2d551d98433d9b3704628701ddd3 P 0da1e73fb15f466fb7b6e1d6ec73d0ee [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 1.
I20250811 02:03:03.183683 14380 raft_consensus.cc:2466] T 536b2d551d98433d9b3704628701ddd3 P ca0adca9c21b470cbf8ca5cc832f299d [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f5439c7301514eb6b5fcaddbad2a6d53 in term 1.
I20250811 02:03:03.184389 14050 leader_election.cc:304] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0da1e73fb15f466fb7b6e1d6ec73d0ee, f5439c7301514eb6b5fcaddbad2a6d53; no voters:
I20250811 02:03:03.185030 14451 raft_consensus.cc:2802] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:03.185354 14451 raft_consensus.cc:695] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [term 1 LEADER]: Becoming Leader. State: Replica: f5439c7301514eb6b5fcaddbad2a6d53, State: Running, Role: LEADER
I20250811 02:03:03.186069 14451 consensus_queue.cc:237] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } }
I20250811 02:03:03.189994 14451 consensus_queue.cc:1035] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:03:03.193784 13969 catalog_manager.cc:5582] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 reported cstate change: term changed from 0 to 1, leader changed from <none> to f5439c7301514eb6b5fcaddbad2a6d53 (127.12.45.1). New cstate: current_term: 1 leader_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f5439c7301514eb6b5fcaddbad2a6d53" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 44491 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 } health_report { overall_health: UNKNOWN } } }
I20250811 02:03:03.206887 14465 consensus_queue.cc:1035] T 3eded7bd61a54b41925faa9914be009d P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250811 02:03:03.571383 14468 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:03.572005 14468 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:03.602003 14468 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250811 02:03:03.721781 14455 consensus_queue.cc:1035] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0da1e73fb15f466fb7b6e1d6ec73d0ee" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42729 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250811 02:03:03.742044 14455 consensus_queue.cc:1035] T 536b2d551d98433d9b3704628701ddd3 P f5439c7301514eb6b5fcaddbad2a6d53 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca0adca9c21b470cbf8ca5cc832f299d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 42031 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250811 02:03:04.902983 14468 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.261s user 0.493s sys 0.761s
W20250811 02:03:04.903277 14468 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.261s user 0.497s sys 0.761s
W20250811 02:03:06.283730 14491 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:06.284435 14491 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:06.316259 14491 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 02:03:07.593942 14491 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.237s user 0.465s sys 0.768s
W20250811 02:03:07.594375 14491 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.238s user 0.465s sys 0.768s
W20250811 02:03:08.990315 14507 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:08.991024 14507 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:09.022694 14507 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 02:03:10.269857 14507 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.206s user 0.414s sys 0.788s
W20250811 02:03:10.270299 14507 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.206s user 0.416s sys 0.790s
W20250811 02:03:11.649067 14521 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:11.649717 14521 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:11.680474 14521 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 02:03:12.943599 14521 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.223s user 0.418s sys 0.804s
W20250811 02:03:12.944032 14521 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.223s user 0.418s sys 0.804s
I20250811 02:03:14.021386 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14029
I20250811 02:03:14.048507 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14163
I20250811 02:03:14.072525 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14296
I20250811 02:03:14.097307 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 13937
2025-08-11T02:03:14Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (20827 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250811 02:03:14.150545 12468 test_util.cc:276] Using random seed: 1401645168
I20250811 02:03:14.154843 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:03:14.155055 12468 ts_itest-base.cc:116] --------------
I20250811 02:03:14.155218 12468 ts_itest-base.cc:117] 3 tablet servers
I20250811 02:03:14.155377 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:03:14.155540 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:03:14Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:03:14Z Disabled control of system clock
I20250811 02:03:14.191672 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:36633
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:37237
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:36633
--superuser_acl=no-such-user with env {}
W20250811 02:03:14.504377 14543 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:14.504952 14543 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:14.505399 14543 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:14.536973 14543 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:03:14.537313 14543 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:14.537529 14543 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:03:14.537744 14543 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:03:14.573148 14543 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:37237
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:36633
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:36633
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:14.574551 14543 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:14.576107 14543 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:14.586793 14549 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:14.587858 14550 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:15.775044 14552 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:15.777724 14551 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1186 milliseconds
I20250811 02:03:15.777879 14543 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:15.779179 14543 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:15.782385 14543 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:15.783789 14543 hybrid_clock.cc:648] HybridClock initialized: now 1754877795783749 us; error 51 us; skew 500 ppm
I20250811 02:03:15.784621 14543 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:15.791293 14543 webserver.cc:489] Webserver started at http://127.12.45.62:37445/ using document root <none> and password file <none>
I20250811 02:03:15.792268 14543 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:15.792475 14543 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:15.793090 14543 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:15.797618 14543 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "942fe201c37c4aa1bd1e6bda6adc79cc"
format_stamp: "Formatted at 2025-08-11 02:03:15 on dist-test-slave-xn5f"
I20250811 02:03:15.798683 14543 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "942fe201c37c4aa1bd1e6bda6adc79cc"
format_stamp: "Formatted at 2025-08-11 02:03:15 on dist-test-slave-xn5f"
I20250811 02:03:15.806445 14543 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250811 02:03:15.811997 14559 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:15.813102 14543 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250811 02:03:15.813417 14543 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "942fe201c37c4aa1bd1e6bda6adc79cc"
format_stamp: "Formatted at 2025-08-11 02:03:15 on dist-test-slave-xn5f"
I20250811 02:03:15.813742 14543 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:15.871508 14543 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:15.873070 14543 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:15.873518 14543 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:15.952127 14543 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:36633
I20250811 02:03:15.952194 14610 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:36633 every 8 connection(s)
I20250811 02:03:15.954861 14543 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:03:15.957621 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14543
I20250811 02:03:15.958223 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:03:15.960729 14611 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:15.986737 14611 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc: Bootstrap starting.
I20250811 02:03:15.992774 14611 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:15.994613 14611 log.cc:826] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:15.999269 14611 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc: No bootstrap required, opened a new log
I20250811 02:03:16.016741 14611 raft_consensus.cc:357] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } }
I20250811 02:03:16.017364 14611 raft_consensus.cc:383] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:16.017581 14611 raft_consensus.cc:738] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 942fe201c37c4aa1bd1e6bda6adc79cc, State: Initialized, Role: FOLLOWER
I20250811 02:03:16.018340 14611 consensus_queue.cc:260] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } }
I20250811 02:03:16.018885 14611 raft_consensus.cc:397] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:16.019167 14611 raft_consensus.cc:491] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:16.019484 14611 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:16.023907 14611 raft_consensus.cc:513] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } }
I20250811 02:03:16.024760 14611 leader_election.cc:304] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 942fe201c37c4aa1bd1e6bda6adc79cc; no voters:
I20250811 02:03:16.026505 14611 leader_election.cc:290] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:03:16.027307 14616 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:16.029440 14616 raft_consensus.cc:695] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [term 1 LEADER]: Becoming Leader. State: Replica: 942fe201c37c4aa1bd1e6bda6adc79cc, State: Running, Role: LEADER
I20250811 02:03:16.030166 14616 consensus_queue.cc:237] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } }
I20250811 02:03:16.031245 14611 sys_catalog.cc:564] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:03:16.041850 14618 sys_catalog.cc:455] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [sys.catalog]: SysCatalogTable state changed. Reason: New leader 942fe201c37c4aa1bd1e6bda6adc79cc. Latest consensus state: current_term: 1 leader_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } } }
I20250811 02:03:16.041579 14617 sys_catalog.cc:455] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "942fe201c37c4aa1bd1e6bda6adc79cc" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 36633 } } }
I20250811 02:03:16.042538 14617 sys_catalog.cc:458] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:16.042538 14618 sys_catalog.cc:458] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:16.046608 14625 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:03:16.060735 14625 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:03:16.077116 14625 catalog_manager.cc:1349] Generated new cluster ID: 3290ad46554b459e9fa9b58d51565b7d
I20250811 02:03:16.077445 14625 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:03:16.096385 14625 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:03:16.098038 14625 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:03:16.111850 14625 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 942fe201c37c4aa1bd1e6bda6adc79cc: Generated new TSK 0
I20250811 02:03:16.112789 14625 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:03:16.137449 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--builtin_ntp_servers=127.12.45.20:37237
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 02:03:16.440301 14635 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:16.440853 14635 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:16.441359 14635 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:16.472795 14635 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:16.473711 14635 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:03:16.509292 14635 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:37237
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:16.510736 14635 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:16.512393 14635 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:16.525067 14641 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:16.526780 14642 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:18.071033 14643 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1544 milliseconds
W20250811 02:03:17.932087 14640 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14635
W20250811 02:03:18.070340 14635 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.544s user 0.446s sys 1.095s
W20250811 02:03:18.071949 14635 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.546s user 0.447s sys 1.095s
I20250811 02:03:18.072777 14635 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 02:03:18.072863 14644 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:18.075937 14635 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:18.078265 14635 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:18.079653 14635 hybrid_clock.cc:648] HybridClock initialized: now 1754877798079620 us; error 35 us; skew 500 ppm
I20250811 02:03:18.080441 14635 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:18.087044 14635 webserver.cc:489] Webserver started at http://127.12.45.1:43131/ using document root <none> and password file <none>
I20250811 02:03:18.088011 14635 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:18.088207 14635 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:18.088670 14635 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:18.092954 14635 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "3faf07e616b44ef6b1001229bf6164ab"
format_stamp: "Formatted at 2025-08-11 02:03:18 on dist-test-slave-xn5f"
I20250811 02:03:18.094055 14635 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "3faf07e616b44ef6b1001229bf6164ab"
format_stamp: "Formatted at 2025-08-11 02:03:18 on dist-test-slave-xn5f"
I20250811 02:03:18.101568 14635 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250811 02:03:18.107554 14652 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:18.108716 14635 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 02:03:18.109071 14635 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "3faf07e616b44ef6b1001229bf6164ab"
format_stamp: "Formatted at 2025-08-11 02:03:18 on dist-test-slave-xn5f"
I20250811 02:03:18.109460 14635 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:18.164853 14635 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:18.166364 14635 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:18.166786 14635 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:18.169770 14635 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:18.173936 14635 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:18.174153 14635 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:18.174412 14635 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:18.174584 14635 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:18.352298 14635 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:37415
I20250811 02:03:18.352491 14764 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:37415 every 8 connection(s)
I20250811 02:03:18.355062 14635 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:03:18.364878 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14635
I20250811 02:03:18.365408 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:03:18.372481 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--builtin_ntp_servers=127.12.45.20:37237
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:18.383846 14765 heartbeater.cc:344] Connected to a master server at 127.12.45.62:36633
I20250811 02:03:18.384301 14765 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:18.385430 14765 heartbeater.cc:507] Master 127.12.45.62:36633 requested a full tablet report, sending...
I20250811 02:03:18.388310 14576 ts_manager.cc:194] Registered new tserver with Master: 3faf07e616b44ef6b1001229bf6164ab (127.12.45.1:37415)
I20250811 02:03:18.390492 14576 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:44599
W20250811 02:03:18.689764 14769 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:18.690276 14769 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:18.690722 14769 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:18.724817 14769 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:18.725610 14769 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:03:18.761065 14769 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:37237
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:18.762410 14769 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:18.764022 14769 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:18.776330 14775 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:19.393994 14765 heartbeater.cc:499] Master 127.12.45.62:36633 was elected leader, sending a full tablet report...
W20250811 02:03:20.178505 14774 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14769
W20250811 02:03:18.776829 14776 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:20.469384 14769 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.692s user 0.630s sys 1.062s
W20250811 02:03:20.469861 14769 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.693s user 0.630s sys 1.062s
W20250811 02:03:20.471465 14778 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:20.475798 14777 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1697 milliseconds
I20250811 02:03:20.475819 14769 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:20.476979 14769 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:20.479058 14769 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:20.480404 14769 hybrid_clock.cc:648] HybridClock initialized: now 1754877800480358 us; error 45 us; skew 500 ppm
I20250811 02:03:20.481170 14769 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:20.487725 14769 webserver.cc:489] Webserver started at http://127.12.45.2:35763/ using document root <none> and password file <none>
I20250811 02:03:20.488695 14769 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:20.488924 14769 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:20.489413 14769 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:20.493737 14769 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18"
format_stamp: "Formatted at 2025-08-11 02:03:20 on dist-test-slave-xn5f"
I20250811 02:03:20.495033 14769 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18"
format_stamp: "Formatted at 2025-08-11 02:03:20 on dist-test-slave-xn5f"
I20250811 02:03:20.501986 14769 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250811 02:03:20.507656 14785 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:20.508603 14769 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 02:03:20.508925 14769 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18"
format_stamp: "Formatted at 2025-08-11 02:03:20 on dist-test-slave-xn5f"
I20250811 02:03:20.509254 14769 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:20.580633 14769 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:20.582130 14769 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:20.582559 14769 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:20.585104 14769 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:20.589200 14769 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:20.589437 14769 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:20.589680 14769 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:20.589855 14769 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:20.724591 14769 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:42411
I20250811 02:03:20.724685 14897 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:42411 every 8 connection(s)
I20250811 02:03:20.727358 14769 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:03:20.735445 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14769
I20250811 02:03:20.735836 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:03:20.743181 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--builtin_ntp_servers=127.12.45.20:37237
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:20.750231 14898 heartbeater.cc:344] Connected to a master server at 127.12.45.62:36633
I20250811 02:03:20.750794 14898 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:20.752089 14898 heartbeater.cc:507] Master 127.12.45.62:36633 requested a full tablet report, sending...
I20250811 02:03:20.754487 14576 ts_manager.cc:194] Registered new tserver with Master: be2f6864b5a34ca48c2bb3d2ce5bec18 (127.12.45.2:42411)
I20250811 02:03:20.756332 14576 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:34971
W20250811 02:03:21.037375 14902 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:21.037900 14902 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:21.038336 14902 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:21.068761 14902 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:21.069545 14902 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:03:21.102665 14902 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:37237
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:36633
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:21.104118 14902 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:21.105779 14902 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:21.117170 14908 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:21.759658 14898 heartbeater.cc:499] Master 127.12.45.62:36633 was elected leader, sending a full tablet report...
W20250811 02:03:22.521322 14907 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 14902
W20250811 02:03:21.118870 14909 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:22.784547 14902 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.666s user 0.503s sys 1.155s
W20250811 02:03:22.786222 14902 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.668s user 0.503s sys 1.155s
W20250811 02:03:22.786674 14911 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:22.789237 14910 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1667 milliseconds
I20250811 02:03:22.789294 14902 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:22.790390 14902 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:22.792466 14902 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:22.793817 14902 hybrid_clock.cc:648] HybridClock initialized: now 1754877802793761 us; error 38 us; skew 500 ppm
I20250811 02:03:22.794574 14902 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:22.800336 14902 webserver.cc:489] Webserver started at http://127.12.45.3:39703/ using document root <none> and password file <none>
I20250811 02:03:22.801223 14902 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:22.801433 14902 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:22.801839 14902 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:22.806118 14902 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "509e5bd04c77452491ad84e4bc14cbbb"
format_stamp: "Formatted at 2025-08-11 02:03:22 on dist-test-slave-xn5f"
I20250811 02:03:22.807263 14902 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "509e5bd04c77452491ad84e4bc14cbbb"
format_stamp: "Formatted at 2025-08-11 02:03:22 on dist-test-slave-xn5f"
I20250811 02:03:22.813853 14902 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250811 02:03:22.819188 14918 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:22.820101 14902 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 02:03:22.820431 14902 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "509e5bd04c77452491ad84e4bc14cbbb"
format_stamp: "Formatted at 2025-08-11 02:03:22 on dist-test-slave-xn5f"
I20250811 02:03:22.820744 14902 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:22.881434 14902 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:22.883097 14902 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:22.883610 14902 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:22.886065 14902 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:22.889940 14902 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:22.890123 14902 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:22.890321 14902 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:22.890453 14902 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:23.022609 14902 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:35213
I20250811 02:03:23.022704 15030 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:35213 every 8 connection(s)
I20250811 02:03:23.025203 14902 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:03:23.027781 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 14902
I20250811 02:03:23.028326 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:03:23.046644 15031 heartbeater.cc:344] Connected to a master server at 127.12.45.62:36633
I20250811 02:03:23.047186 15031 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:23.048555 15031 heartbeater.cc:507] Master 127.12.45.62:36633 requested a full tablet report, sending...
I20250811 02:03:23.050696 14576 ts_manager.cc:194] Registered new tserver with Master: 509e5bd04c77452491ad84e4bc14cbbb (127.12.45.3:35213)
I20250811 02:03:23.052098 14576 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:55757
I20250811 02:03:23.063192 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:23.095315 14576 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:50428:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250811 02:03:23.114101 14576 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:03:23.165401 14966 tablet_service.cc:1468] Processing CreateTablet for tablet 2027d12c133741d1abd394cfb1a19b82 (DEFAULT_TABLE table=TestTable [id=a23caec0834f41b2b98a0a8f234cfc58]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:23.167657 14966 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2027d12c133741d1abd394cfb1a19b82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:23.173691 14700 tablet_service.cc:1468] Processing CreateTablet for tablet 2027d12c133741d1abd394cfb1a19b82 (DEFAULT_TABLE table=TestTable [id=a23caec0834f41b2b98a0a8f234cfc58]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:23.174104 14833 tablet_service.cc:1468] Processing CreateTablet for tablet 2027d12c133741d1abd394cfb1a19b82 (DEFAULT_TABLE table=TestTable [id=a23caec0834f41b2b98a0a8f234cfc58]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:23.175765 14700 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2027d12c133741d1abd394cfb1a19b82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:23.175959 14833 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2027d12c133741d1abd394cfb1a19b82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:23.192955 15050 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Bootstrap starting.
I20250811 02:03:23.199165 15050 tablet_bootstrap.cc:654] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:23.200989 15050 log.cc:826] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:23.204135 15051 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Bootstrap starting.
I20250811 02:03:23.207084 15052 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Bootstrap starting.
I20250811 02:03:23.211395 15050 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: No bootstrap required, opened a new log
I20250811 02:03:23.211927 15050 ts_tablet_manager.cc:1397] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Time spent bootstrapping tablet: real 0.020s user 0.006s sys 0.012s
I20250811 02:03:23.212677 15052 tablet_bootstrap.cc:654] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:23.213637 15051 tablet_bootstrap.cc:654] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:23.214331 15052 log.cc:826] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:23.215931 15051 log.cc:826] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:23.219473 15052 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: No bootstrap required, opened a new log
I20250811 02:03:23.219929 15052 ts_tablet_manager.cc:1397] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Time spent bootstrapping tablet: real 0.013s user 0.005s sys 0.007s
I20250811 02:03:23.222177 15051 tablet_bootstrap.cc:492] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: No bootstrap required, opened a new log
I20250811 02:03:23.222642 15051 ts_tablet_manager.cc:1397] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Time spent bootstrapping tablet: real 0.020s user 0.009s sys 0.005s
I20250811 02:03:23.236918 15050 raft_consensus.cc:357] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.237056 15052 raft_consensus.cc:357] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.237936 15052 raft_consensus.cc:383] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:23.237996 15050 raft_consensus.cc:383] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:23.238266 15052 raft_consensus.cc:738] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: be2f6864b5a34ca48c2bb3d2ce5bec18, State: Initialized, Role: FOLLOWER
I20250811 02:03:23.238373 15050 raft_consensus.cc:738] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 509e5bd04c77452491ad84e4bc14cbbb, State: Initialized, Role: FOLLOWER
I20250811 02:03:23.239394 15052 consensus_queue.cc:260] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.239264 15050 consensus_queue.cc:260] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.243675 15052 ts_tablet_manager.cc:1428] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Time spent starting tablet: real 0.023s user 0.017s sys 0.006s
I20250811 02:03:23.243913 15031 heartbeater.cc:499] Master 127.12.45.62:36633 was elected leader, sending a full tablet report...
I20250811 02:03:23.245271 15050 ts_tablet_manager.cc:1428] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Time spent starting tablet: real 0.033s user 0.024s sys 0.009s
I20250811 02:03:23.245932 15057 raft_consensus.cc:491] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:03:23.246510 15057 raft_consensus.cc:513] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.251742 15057 leader_election.cc:290] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 3faf07e616b44ef6b1001229bf6164ab (127.12.45.1:37415), be2f6864b5a34ca48c2bb3d2ce5bec18 (127.12.45.2:42411)
I20250811 02:03:23.252192 15051 raft_consensus.cc:357] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.253250 15051 raft_consensus.cc:383] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:23.253582 15051 raft_consensus.cc:738] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3faf07e616b44ef6b1001229bf6164ab, State: Initialized, Role: FOLLOWER
I20250811 02:03:23.254701 15051 consensus_queue.cc:260] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.267010 15051 ts_tablet_manager.cc:1428] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Time spent starting tablet: real 0.044s user 0.031s sys 0.010s
I20250811 02:03:23.268134 14853 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2027d12c133741d1abd394cfb1a19b82" candidate_uuid: "509e5bd04c77452491ad84e4bc14cbbb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" is_pre_election: true
I20250811 02:03:23.269114 14853 raft_consensus.cc:2466] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 509e5bd04c77452491ad84e4bc14cbbb in term 0.
I20250811 02:03:23.269551 14720 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2027d12c133741d1abd394cfb1a19b82" candidate_uuid: "509e5bd04c77452491ad84e4bc14cbbb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3faf07e616b44ef6b1001229bf6164ab" is_pre_election: true
I20250811 02:03:23.270365 14720 raft_consensus.cc:2466] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 509e5bd04c77452491ad84e4bc14cbbb in term 0.
I20250811 02:03:23.270525 14919 leader_election.cc:304] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 509e5bd04c77452491ad84e4bc14cbbb, be2f6864b5a34ca48c2bb3d2ce5bec18; no voters:
I20250811 02:03:23.271485 15057 raft_consensus.cc:2802] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:03:23.271840 15057 raft_consensus.cc:491] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:03:23.272199 15057 raft_consensus.cc:3058] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:23.276932 15057 raft_consensus.cc:513] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.278129 15057 leader_election.cc:290] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [CANDIDATE]: Term 1 election: Requested vote from peers 3faf07e616b44ef6b1001229bf6164ab (127.12.45.1:37415), be2f6864b5a34ca48c2bb3d2ce5bec18 (127.12.45.2:42411)
I20250811 02:03:23.278858 14720 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2027d12c133741d1abd394cfb1a19b82" candidate_uuid: "509e5bd04c77452491ad84e4bc14cbbb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3faf07e616b44ef6b1001229bf6164ab"
I20250811 02:03:23.278952 14853 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2027d12c133741d1abd394cfb1a19b82" candidate_uuid: "509e5bd04c77452491ad84e4bc14cbbb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18"
I20250811 02:03:23.279310 14720 raft_consensus.cc:3058] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:23.279467 14853 raft_consensus.cc:3058] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 0 FOLLOWER]: Advancing to term 1
W20250811 02:03:23.279601 15032 tablet.cc:2378] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:03:23.283654 14720 raft_consensus.cc:2466] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 509e5bd04c77452491ad84e4bc14cbbb in term 1.
I20250811 02:03:23.283664 14853 raft_consensus.cc:2466] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 509e5bd04c77452491ad84e4bc14cbbb in term 1.
I20250811 02:03:23.284581 14920 leader_election.cc:304] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3faf07e616b44ef6b1001229bf6164ab, 509e5bd04c77452491ad84e4bc14cbbb; no voters:
I20250811 02:03:23.285362 15057 raft_consensus.cc:2802] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:23.286990 15057 raft_consensus.cc:695] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [term 1 LEADER]: Becoming Leader. State: Replica: 509e5bd04c77452491ad84e4bc14cbbb, State: Running, Role: LEADER
I20250811 02:03:23.287952 15057 consensus_queue.cc:237] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } }
I20250811 02:03:23.297719 14575 catalog_manager.cc:5582] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb reported cstate change: term changed from 0 to 1, leader changed from <none> to 509e5bd04c77452491ad84e4bc14cbbb (127.12.45.3). New cstate: current_term: 1 leader_uuid: "509e5bd04c77452491ad84e4bc14cbbb" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e5bd04c77452491ad84e4bc14cbbb" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 35213 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 } health_report { overall_health: UNKNOWN } } }
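The election above plays out exactly as Raft's majority rule predicts: with three voters, candidate 509e5bd04c77452491ad84e4bc14cbbb needs two yes votes (its own plus one peer) in both the pre-election and the real election, and the result is logged as soon as the second response arrives, without waiting for the third voter. A minimal sketch of that vote-counting logic, with hypothetical names and not Kudu's actual implementation:

    # Hypothetical illustration of Raft-style vote counting as seen in the
    # election log above; not Kudu's actual implementation.
    def majority_size(num_voters: int) -> int:
        return num_voters // 2 + 1          # 3 voters -> 2 votes needed

    def decide(num_voters: int, responses: list[bool]) -> str | None:
        """Return 'won'/'lost' as soon as either outcome is certain, else None."""
        yes = sum(responses)
        no = len(responses) - yes
        needed = majority_size(num_voters)
        if yes >= needed:
            return "won"
        if no >= needed:                     # a majority of no votes also decides
            return "lost"
        return None                          # still waiting on more voters

    # The candidate votes yes for itself, then one peer grants its vote:
    # the election is decided after 2 of 3 responses, matching the log.
    assert decide(3, [True, True]) == "won"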
I20250811 02:03:23.326819 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:23.329874 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 3faf07e616b44ef6b1001229bf6164ab to finish bootstrapping
I20250811 02:03:23.341395 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver be2f6864b5a34ca48c2bb3d2ce5bec18 to finish bootstrapping
I20250811 02:03:23.351704 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 509e5bd04c77452491ad84e4bc14cbbb to finish bootstrapping
W20250811 02:03:23.365731 14766 tablet.cc:2378] T 2027d12c133741d1abd394cfb1a19b82 P 3faf07e616b44ef6b1001229bf6164ab: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 02:03:23.484647 14899 tablet.cc:2378] T 2027d12c133741d1abd394cfb1a19b82 P be2f6864b5a34ca48c2bb3d2ce5bec18: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:03:23.688032 15057 consensus_queue.cc:1035] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [LEADER]: Connected to new peer: Peer: permanent_uuid: "be2f6864b5a34ca48c2bb3d2ce5bec18" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 42411 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:03:23.708307 15057 consensus_queue.cc:1035] T 2027d12c133741d1abd394cfb1a19b82 P 509e5bd04c77452491ad84e4bc14cbbb [LEADER]: Connected to new peer: Peer: permanent_uuid: "3faf07e616b44ef6b1001229bf6164ab" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 37415 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250811 02:03:24.985631 14575 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:50442
I20250811 02:03:26.015691 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14635
I20250811 02:03:26.040428 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14769
I20250811 02:03:26.066762 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14902
I20250811 02:03:26.091645 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 14543
2025-08-11T02:03:26Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (11992 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250811 02:03:26.143460 12468 test_util.cc:276] Using random seed: 1413638079
I20250811 02:03:26.147497 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:03:26.147665 12468 ts_itest-base.cc:116] --------------
I20250811 02:03:26.147827 12468 ts_itest-base.cc:117] 3 tablet servers
I20250811 02:03:26.147979 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:03:26.148114 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:03:26Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:03:26Z Disabled control of system clock
I20250811 02:03:26.183795 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:38233 with env {}
W20250811 02:03:26.489099 15095 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:26.489719 15095 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:26.490164 15095 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:26.521571 15095 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:03:26.521965 15095 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:26.522244 15095 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:03:26.522507 15095 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:03:26.558475 15095 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:38233
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:26.560093 15095 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:26.561894 15095 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:26.572868 15101 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:27.977756 15100 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 15095
W20250811 02:03:28.201886 15100 kernel_stack_watchdog.cc:198] Thread 15095 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 402ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:03:26.573315 15102 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:28.202730 15095 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.630s user 0.533s sys 1.096s
W20250811 02:03:28.203218 15095 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.630s user 0.534s sys 1.096s
W20250811 02:03:28.205130 15104 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:28.207332 15103 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1630 milliseconds
I20250811 02:03:28.207355 15095 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:28.208623 15095 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:28.211145 15095 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:28.212478 15095 hybrid_clock.cc:648] HybridClock initialized: now 1754877808212446 us; error 43 us; skew 500 ppm
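The HybridClock line above reports a current maximum error of 43 us and a skew bound of 500 ppm. A 500 ppm skew means the local clock may drift up to 500 microseconds for every second that elapses since the last successful synchronization, so the error bound grows linearly between syncs. A back-of-the-envelope sketch of that growth (illustrative only, not Kudu's clock code):

    # Illustrative only: how a 500 ppm skew bound inflates the clock error
    # between synchronizations. Not Kudu's HybridClock implementation.
    SKEW_PPM = 500            # from the log: "skew 500 ppm"
    BASE_ERROR_US = 43        # from the log: "error 43 us"

    def max_error_us(seconds_since_sync: float) -> float:
        # Drift accumulates at SKEW_PPM microseconds per elapsed second.
        return BASE_ERROR_US + SKEW_PPM * seconds_since_sync

    print(max_error_us(0))    # 43.0   (right after a sync)
    print(max_error_us(1))    # 543.0  (one second later)
    print(max_error_us(10))   # 5043.0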
I20250811 02:03:28.213274 15095 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:28.219434 15095 webserver.cc:489] Webserver started at http://127.12.45.62:37139/ using document root <none> and password file <none>
I20250811 02:03:28.220377 15095 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:28.220603 15095 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:28.221060 15095 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:28.225347 15095 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:28.226581 15095 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:28.233762 15095 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250811 02:03:28.239113 15112 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:28.240059 15095 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250811 02:03:28.240382 15095 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:28.240725 15095 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:28.293828 15095 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:28.295441 15095 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:28.295897 15095 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:28.368185 15095 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:38233
I20250811 02:03:28.368242 15163 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:38233 every 8 connection(s)
I20250811 02:03:28.371052 15095 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:03:28.374384 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15095
I20250811 02:03:28.375108 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:03:28.377044 15164 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:28.399158 15164 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap starting.
I20250811 02:03:28.405050 15164 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:28.406848 15164 log.cc:826] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:28.411695 15164 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: No bootstrap required, opened a new log
I20250811 02:03:28.428807 15164 raft_consensus.cc:357] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:28.429455 15164 raft_consensus.cc:383] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:28.429665 15164 raft_consensus.cc:738] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Initialized, Role: FOLLOWER
I20250811 02:03:28.430307 15164 consensus_queue.cc:260] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:28.430799 15164 raft_consensus.cc:397] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:28.431072 15164 raft_consensus.cc:491] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:28.431365 15164 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:28.435302 15164 raft_consensus.cc:513] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:28.436014 15164 leader_election.cc:304] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 486f1497202943c283c8305e5ca9a2e7; no voters:
I20250811 02:03:28.437927 15164 leader_election.cc:290] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:03:28.438707 15169 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:28.440992 15169 raft_consensus.cc:695] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 1 LEADER]: Becoming Leader. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Running, Role: LEADER
I20250811 02:03:28.441829 15169 consensus_queue.cc:237] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:28.442763 15164 sys_catalog.cc:564] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:03:28.452032 15171 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 486f1497202943c283c8305e5ca9a2e7. Latest consensus state: current_term: 1 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:28.452823 15171 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:28.452821 15170 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:28.453461 15170 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:28.456821 15178 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:03:28.468261 15178 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:03:28.485550 15178 catalog_manager.cc:1349] Generated new cluster ID: e0f8de29cc554c7283e85520427607c9
I20250811 02:03:28.485805 15178 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:03:28.498591 15178 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:03:28.500555 15178 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:03:28.520443 15178 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Generated new TSK 0
I20250811 02:03:28.521672 15178 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:03:28.532754 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
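The command listing above shows the shape of each child-process launch: the kudu binary, a block of generic daemon flags, then the tserver run subcommand followed by its server-specific flags. A rough harness-side sketch of assembling and spawning such a process with Python's subprocess module (paths and flag values are taken from the log; the helper itself is hypothetical, the real ExternalMiniCluster is C++ inside Kudu):

    # Hypothetical sketch of launching a tablet server with the flags shown
    # above; not the real ExternalMiniCluster implementation.
    import subprocess

    KUDU_BIN = "/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu"

    def start_tserver(wal_dir: str, data_dir: str, master_addr: str,
                      bind_ip: str) -> subprocess.Popen:
        argv = [
            KUDU_BIN,
            f"--fs_wal_dir={wal_dir}",
            f"--fs_data_dirs={data_dir}",
            "--block_manager=log",
            "--logtostderr",
            "tserver", "run",                       # subcommand after the
            f"--rpc_bind_addresses={bind_ip}:0",    # generic daemon flags,
            f"--tserver_master_addrs={master_addr}",# as in the listing above
            "--time_source=builtin",
        ]
        # Inherit stdout/stderr so the child's glog output interleaves with
        # the test's own log, as seen throughout this file.
        return subprocess.Popen(argv)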
W20250811 02:03:28.839401 15188 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:28.839929 15188 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:28.840426 15188 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:28.873441 15188 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:28.874290 15188 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:03:28.910743 15188 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:28.912199 15188 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:28.913919 15188 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:28.926856 15194 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:28.929845 15195 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:28.931429 15197 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:28.931803 15188 server_base.cc:1047] running on GCE node
I20250811 02:03:30.071321 15188 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:30.074116 15188 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:30.075541 15188 hybrid_clock.cc:648] HybridClock initialized: now 1754877810075470 us; error 84 us; skew 500 ppm
I20250811 02:03:30.076382 15188 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:30.083621 15188 webserver.cc:489] Webserver started at http://127.12.45.1:33597/ using document root <none> and password file <none>
I20250811 02:03:30.084640 15188 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:30.084872 15188 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:30.085352 15188 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:30.089860 15188 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "1eb10bfe655143db90d05241378bac9e"
format_stamp: "Formatted at 2025-08-11 02:03:30 on dist-test-slave-xn5f"
I20250811 02:03:30.090904 15188 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "1eb10bfe655143db90d05241378bac9e"
format_stamp: "Formatted at 2025-08-11 02:03:30 on dist-test-slave-xn5f"
I20250811 02:03:30.098551 15188 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250811 02:03:30.104905 15204 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:30.106063 15188 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:03:30.106410 15188 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1eb10bfe655143db90d05241378bac9e"
format_stamp: "Formatted at 2025-08-11 02:03:30 on dist-test-slave-xn5f"
I20250811 02:03:30.106757 15188 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:30.166204 15188 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:30.167838 15188 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:30.168267 15188 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:30.171119 15188 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:30.175581 15188 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:30.175786 15188 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:30.176056 15188 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:30.176203 15188 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:30.313982 15188 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:33479
I20250811 02:03:30.314054 15316 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:33479 every 8 connection(s)
I20250811 02:03:30.316545 15188 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:03:30.323997 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15188
I20250811 02:03:30.324535 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:03:30.331363 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:30.341382 15317 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:30.341802 15317 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:30.342824 15317 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:30.345877 15129 ts_manager.cc:194] Registered new tserver with Master: 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479)
I20250811 02:03:30.349040 15129 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:35439
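The five lines above trace the tablet server registration handshake: the heartbeater connects to the master, registers the TS, the master (which has no record of it yet) asks for a full tablet report, records the new tserver, and signs an X509 certificate for it. A toy state-machine sketch of that exchange, using hypothetical classes rather than Kudu's heartbeater/MasterService RPCs:

    # Toy sketch of the register-then-report handshake seen in the log above.
    # Hypothetical classes; the real exchange is Kudu RPC between the
    # heartbeater and MasterService.
    from dataclasses import dataclass, field

    @dataclass
    class ToyMaster:
        tservers: dict[str, str] = field(default_factory=dict)  # uuid -> addr

        def heartbeat(self, uuid: str, addr: str, report: list[str] | None):
            if uuid not in self.tservers:
                self.tservers[uuid] = addr
                # A tserver the master has never seen must send everything
                # it hosts, hence the "requested a full tablet report" line.
                return {"needs_full_report": True}
            return {"needs_full_report": False, "tablets_seen": report or []}

    master = ToyMaster()
    first = master.heartbeat("1eb10bfe655143db90d05241378bac9e",
                             "127.12.45.1:33479", report=None)
    assert first["needs_full_report"]          # master asks for a full report
    second = master.heartbeat("1eb10bfe655143db90d05241378bac9e",
                              "127.12.45.1:33479", report=[])
    assert not second["needs_full_report"]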
W20250811 02:03:30.629335 15321 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:30.629835 15321 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:30.630275 15321 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:30.661345 15321 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:30.662178 15321 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:03:30.696276 15321 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:30.697664 15321 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:30.699216 15321 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:30.711809 15327 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:31.352968 15317 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
W20250811 02:03:30.712169 15328 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:31.863045 15330 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:31.865121 15329 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1149 milliseconds
I20250811 02:03:31.865301 15321 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:31.866521 15321 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:31.869199 15321 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:31.870610 15321 hybrid_clock.cc:648] HybridClock initialized: now 1754877811870552 us; error 67 us; skew 500 ppm
I20250811 02:03:31.871423 15321 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:31.877941 15321 webserver.cc:489] Webserver started at http://127.12.45.2:42569/ using document root <none> and password file <none>
I20250811 02:03:31.878851 15321 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:31.879076 15321 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:31.879565 15321 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:31.883795 15321 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "91bc21b8f774428bae1e2365ab7e1f37"
format_stamp: "Formatted at 2025-08-11 02:03:31 on dist-test-slave-xn5f"
I20250811 02:03:31.884863 15321 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "91bc21b8f774428bae1e2365ab7e1f37"
format_stamp: "Formatted at 2025-08-11 02:03:31 on dist-test-slave-xn5f"
I20250811 02:03:31.891677 15321 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250811 02:03:31.897221 15337 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:31.898109 15321 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250811 02:03:31.898406 15321 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "91bc21b8f774428bae1e2365ab7e1f37"
format_stamp: "Formatted at 2025-08-11 02:03:31 on dist-test-slave-xn5f"
I20250811 02:03:31.898702 15321 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:31.962126 15321 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:31.963568 15321 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:31.964000 15321 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:31.966501 15321 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:31.970304 15321 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:31.970547 15321 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:31.970773 15321 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:31.970934 15321 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:32.100689 15321 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:44385
I20250811 02:03:32.100780 15449 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:44385 every 8 connection(s)
I20250811 02:03:32.103370 15321 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:03:32.110334 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15321
I20250811 02:03:32.110740 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:03:32.116950 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:32.124680 15450 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:32.125212 15450 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:32.126562 15450 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:32.129065 15129 ts_manager.cc:194] Registered new tserver with Master: 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385)
I20250811 02:03:32.130264 15129 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:50091
W20250811 02:03:32.412073 15454 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:32.412560 15454 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:32.413053 15454 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:32.442536 15454 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:32.443480 15454 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:03:32.476447 15454 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:32.477998 15454 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:32.479539 15454 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:32.491889 15460 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:33.134027 15450 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
W20250811 02:03:32.492206 15461 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:33.711498 15463 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:33.714123 15462 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1216 milliseconds
W20250811 02:03:33.714903 15454 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.223s user 0.423s sys 0.795s
W20250811 02:03:33.715200 15454 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.224s user 0.423s sys 0.796s
I20250811 02:03:33.715415 15454 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:33.716462 15454 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:33.718695 15454 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:33.720047 15454 hybrid_clock.cc:648] HybridClock initialized: now 1754877813720017 us; error 39 us; skew 500 ppm
I20250811 02:03:33.720844 15454 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:33.728010 15454 webserver.cc:489] Webserver started at http://127.12.45.3:40649/ using document root <none> and password file <none>
I20250811 02:03:33.729192 15454 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:33.729447 15454 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:33.729912 15454 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:03:33.734411 15454 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "9265cb3403ac47649cd338059475e08d"
format_stamp: "Formatted at 2025-08-11 02:03:33 on dist-test-slave-xn5f"
I20250811 02:03:33.735544 15454 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "9265cb3403ac47649cd338059475e08d"
format_stamp: "Formatted at 2025-08-11 02:03:33 on dist-test-slave-xn5f"
I20250811 02:03:33.743652 15454 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.002s
I20250811 02:03:33.750000 15470 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:33.751235 15454 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.001s
I20250811 02:03:33.751607 15454 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "9265cb3403ac47649cd338059475e08d"
format_stamp: "Formatted at 2025-08-11 02:03:33 on dist-test-slave-xn5f"
I20250811 02:03:33.751910 15454 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:33.821038 15454 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:33.822443 15454 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:33.822861 15454 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:33.825328 15454 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:33.829275 15454 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:03:33.829492 15454 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:33.829727 15454 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:03:33.829874 15454 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:33.958586 15454 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:36733
I20250811 02:03:33.958724 15582 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:36733 every 8 connection(s)
I20250811 02:03:33.961099 15454 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:03:33.964807 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15454
I20250811 02:03:33.965355 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:03:33.981777 15583 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:33.982206 15583 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:33.983352 15583 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:33.985713 15128 ts_manager.cc:194] Registered new tserver with Master: 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
I20250811 02:03:33.987207 15128 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:54645
I20250811 02:03:33.999858 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:34.035518 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:34.035864 12468 test_util.cc:276] Using random seed: 1421530503
I20250811 02:03:34.077232 15128 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:33648:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 02:03:34.121529 15252 tablet_service.cc:1468] Processing CreateTablet for tablet c646bf4f65cc45208f9880e776286dc1 (DEFAULT_TABLE table=TestTable [id=af3bb3f54ad24e93a1d0f5cbb7acda82]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:34.123176 15252 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c646bf4f65cc45208f9880e776286dc1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:34.142838 15603 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap starting.
I20250811 02:03:34.148638 15603 tablet_bootstrap.cc:654] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:34.150616 15603 log.cc:826] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:34.155004 15603 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: No bootstrap required, opened a new log
I20250811 02:03:34.155403 15603 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent bootstrapping tablet: real 0.013s user 0.009s sys 0.004s
I20250811 02:03:34.172575 15603 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:34.173185 15603 raft_consensus.cc:383] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:34.173384 15603 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Initialized, Role: FOLLOWER
I20250811 02:03:34.174082 15603 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:34.174748 15603 raft_consensus.cc:397] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:34.175100 15603 raft_consensus.cc:491] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:34.175437 15603 raft_consensus.cc:3058] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:34.179529 15603 raft_consensus.cc:513] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:34.180233 15603 leader_election.cc:304] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e; no voters:
I20250811 02:03:34.182045 15603 leader_election.cc:290] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:03:34.182438 15605 raft_consensus.cc:2802] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:34.184616 15605 raft_consensus.cc:695] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 LEADER]: Becoming Leader. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Running, Role: LEADER
I20250811 02:03:34.185595 15605 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:34.186074 15603 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent starting tablet: real 0.030s user 0.023s sys 0.007s
I20250811 02:03:34.198957 15128 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: term changed from 0 to 1, leader changed from <none> to 1eb10bfe655143db90d05241378bac9e (127.12.45.1). New cstate: current_term: 1 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:34.481854 12468 test_util.cc:276] Using random seed: 1421976477
I20250811 02:03:34.507187 15128 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:33660:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 02:03:34.535914 15385 tablet_service.cc:1468] Processing CreateTablet for tablet 0b62a4d4eed4485aa1f36bc304d94a53 (DEFAULT_TABLE table=TestTable1 [id=5f6a9764ad334fa59ae92a7c9f0caacd]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:34.537674 15385 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0b62a4d4eed4485aa1f36bc304d94a53. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:34.557260 15624 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap starting.
I20250811 02:03:34.563021 15624 tablet_bootstrap.cc:654] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:34.564738 15624 log.cc:826] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:34.569437 15624 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: No bootstrap required, opened a new log
I20250811 02:03:34.569844 15624 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent bootstrapping tablet: real 0.013s user 0.010s sys 0.000s
I20250811 02:03:34.587329 15624 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:34.588052 15624 raft_consensus.cc:383] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:34.588280 15624 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Initialized, Role: FOLLOWER
I20250811 02:03:34.588945 15624 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:34.589450 15624 raft_consensus.cc:397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:34.589694 15624 raft_consensus.cc:491] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:34.589985 15624 raft_consensus.cc:3058] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:34.594087 15624 raft_consensus.cc:513] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:34.594976 15624 leader_election.cc:304] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 91bc21b8f774428bae1e2365ab7e1f37; no voters:
I20250811 02:03:34.596776 15624 leader_election.cc:290] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:03:34.597177 15626 raft_consensus.cc:2802] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:34.599617 15626 raft_consensus.cc:695] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 LEADER]: Becoming Leader. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Running, Role: LEADER
I20250811 02:03:34.600613 15626 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:34.600961 15624 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent starting tablet: real 0.031s user 0.028s sys 0.004s
I20250811 02:03:34.612488 15128 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: term changed from 0 to 1, leader changed from <none> to 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2). New cstate: current_term: 1 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:34.789899 12468 test_util.cc:276] Using random seed: 1422284520
I20250811 02:03:34.811722 15125 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:33674:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 02:03:34.840564 15518 tablet_service.cc:1468] Processing CreateTablet for tablet bf8ce350bb0d4d84a7bd8dd00558a9b8 (DEFAULT_TABLE table=TestTable2 [id=c5231d6bd5a64ab2aa645cb49916cb66]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:03:34.842182 15518 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bf8ce350bb0d4d84a7bd8dd00558a9b8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:34.860944 15645 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:03:34.866434 15645 tablet_bootstrap.cc:654] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Neither blocks nor log segments found. Creating new log.
I20250811 02:03:34.868084 15645 log.cc:826] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:34.872211 15645 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: No bootstrap required, opened a new log
I20250811 02:03:34.872653 15645 ts_tablet_manager.cc:1397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.012s user 0.004s sys 0.006s
I20250811 02:03:34.889320 15645 raft_consensus.cc:357] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:34.889871 15645 raft_consensus.cc:383] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:03:34.890148 15645 raft_consensus.cc:738] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: FOLLOWER
I20250811 02:03:34.890779 15645 consensus_queue.cc:260] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:34.891261 15645 raft_consensus.cc:397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:34.891561 15645 raft_consensus.cc:491] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:34.891845 15645 raft_consensus.cc:3058] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:03:34.895848 15645 raft_consensus.cc:513] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:34.896548 15645 leader_election.cc:304] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9265cb3403ac47649cd338059475e08d; no voters:
I20250811 02:03:34.898808 15645 leader_election.cc:290] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:03:34.899240 15647 raft_consensus.cc:2802] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:03:34.902742 15647 raft_consensus.cc:695] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 LEADER]: Becoming Leader. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Running, Role: LEADER
I20250811 02:03:34.903754 15583 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:03:34.904242 15645 ts_tablet_manager.cc:1428] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.031s user 0.029s sys 0.003s
I20250811 02:03:34.904623 15647 consensus_queue.cc:237] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:34.912557 15125 catalog_manager.cc:5582] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d reported cstate change: term changed from 0 to 1, leader changed from <none> to 9265cb3403ac47649cd338059475e08d (127.12.45.3). New cstate: current_term: 1 leader_uuid: "9265cb3403ac47649cd338059475e08d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:35.072265 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15095
W20250811 02:03:35.214150 15317 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
W20250811 02:03:35.313464 15579 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac060 after lost signal to thread 15455
W20250811 02:03:35.641749 15450 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
W20250811 02:03:35.942966 15583 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
I20250811 02:03:40.400966 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15188
I20250811 02:03:40.423808 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15321
I20250811 02:03:40.451085 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15454
I20250811 02:03:40.482326 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--webserver_interface=127.12.45.62
--webserver_port=37139
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:38233 with env {}
W20250811 02:03:40.793596 15727 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:40.794265 15727 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:40.794742 15727 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:40.827145 15727 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:03:40.827482 15727 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:40.827723 15727 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:03:40.827980 15727 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:03:40.863143 15727 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:38233
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=37139
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:40.864559 15727 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:40.866348 15727 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:40.877329 15733 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:42.281174 15732 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 15727
W20250811 02:03:40.877712 15734 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:42.630232 15727 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.752s user 0.596s sys 1.156s
W20250811 02:03:42.630714 15727 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.753s user 0.596s sys 1.156s
W20250811 02:03:42.631886 15736 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:42.633955 15735 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1752 milliseconds
I20250811 02:03:42.633976 15727 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:42.635346 15727 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:42.637866 15727 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:42.639191 15727 hybrid_clock.cc:648] HybridClock initialized: now 1754877822639144 us; error 47 us; skew 500 ppm
I20250811 02:03:42.640012 15727 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:42.646250 15727 webserver.cc:489] Webserver started at http://127.12.45.62:37139/ using document root <none> and password file <none>
I20250811 02:03:42.647225 15727 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:42.647465 15727 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:42.654825 15727 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.003s
I20250811 02:03:42.659283 15743 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:42.660274 15727 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 02:03:42.660594 15727 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:42.662518 15727 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:42.710611 15727 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:42.712057 15727 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:42.712582 15727 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:42.785452 15727 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:38233
I20250811 02:03:42.785501 15794 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:38233 every 8 connection(s)
I20250811 02:03:42.788410 15727 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:03:42.798429 15795 sys_catalog.cc:263] Verifying existing consensus state
I20250811 02:03:42.798591 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15727
I20250811 02:03:42.800570 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:33479
--local_ip_for_outbound_sockets=127.12.45.1
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=33597
--webserver_interface=127.12.45.1
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:42.806176 15795 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap starting.
I20250811 02:03:42.817387 15795 log.cc:826] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:42.864248 15795 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:42.865031 15795 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap complete.
I20250811 02:03:42.884650 15795 raft_consensus.cc:357] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:42.886708 15795 raft_consensus.cc:738] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Initialized, Role: FOLLOWER
I20250811 02:03:42.887537 15795 consensus_queue.cc:260] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:42.888065 15795 raft_consensus.cc:397] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:42.888329 15795 raft_consensus.cc:491] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:42.888640 15795 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 2 FOLLOWER]: Advancing to term 3
I20250811 02:03:42.894089 15795 raft_consensus.cc:513] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:42.894666 15795 leader_election.cc:304] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 486f1497202943c283c8305e5ca9a2e7; no voters:
I20250811 02:03:42.896808 15795 leader_election.cc:290] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250811 02:03:42.897253 15799 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Leader election won for term 3
I20250811 02:03:42.900710 15799 raft_consensus.cc:695] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 LEADER]: Becoming Leader. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Running, Role: LEADER
I20250811 02:03:42.901506 15799 consensus_queue.cc:237] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:42.902024 15795 sys_catalog.cc:564] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:03:42.911427 15800 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:42.912010 15800 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:42.913959 15801 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 486f1497202943c283c8305e5ca9a2e7. Latest consensus state: current_term: 3 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:42.914616 15801 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:42.922519 15806 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:03:42.937088 15806 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=5a733dd1d69f4bc7a6ae0ec2d1c00eb6]
I20250811 02:03:42.939141 15806 catalog_manager.cc:671] Loaded metadata for table TestTable [id=782a33ca53994f12a00abb6cd46fe772]
I20250811 02:03:42.941084 15806 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=c5231d6bd5a64ab2aa645cb49916cb66]
I20250811 02:03:42.951010 15806 tablet_loader.cc:96] loaded metadata for tablet 0b62a4d4eed4485aa1f36bc304d94a53 (table TestTable1 [id=5a733dd1d69f4bc7a6ae0ec2d1c00eb6])
I20250811 02:03:42.952905 15806 tablet_loader.cc:96] loaded metadata for tablet bf8ce350bb0d4d84a7bd8dd00558a9b8 (table TestTable2 [id=c5231d6bd5a64ab2aa645cb49916cb66])
I20250811 02:03:42.954233 15806 tablet_loader.cc:96] loaded metadata for tablet c646bf4f65cc45208f9880e776286dc1 (table TestTable [id=782a33ca53994f12a00abb6cd46fe772])
I20250811 02:03:42.955883 15806 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:03:42.962663 15806 catalog_manager.cc:1261] Loaded cluster ID: e0f8de29cc554c7283e85520427607c9
I20250811 02:03:42.963052 15806 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:03:42.973109 15816 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
I20250811 02:03:42.973119 15806 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:03:42.977540 15806 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Loaded TSK: 0
I20250811 02:03:42.978711 15806 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 02:03:43.131176 15797 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:43.131695 15797 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:43.132175 15797 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:43.163367 15797 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:43.164194 15797 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:03:43.198589 15797 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:33479
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=33597
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:43.200093 15797 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:43.201735 15797 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:43.214668 15822 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:44.616047 15821 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 15797
W20250811 02:03:44.908304 15797 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.693s user 0.593s sys 0.967s
W20250811 02:03:44.910084 15824 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1694 milliseconds
W20250811 02:03:44.910246 15797 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.695s user 0.593s sys 0.967s
W20250811 02:03:43.216274 15823 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:44.910583 15825 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:44.910599 15797 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:44.915185 15797 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:44.917778 15797 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:44.919224 15797 hybrid_clock.cc:648] HybridClock initialized: now 1754877824919166 us; error 50 us; skew 500 ppm
I20250811 02:03:44.920305 15797 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:44.927678 15797 webserver.cc:489] Webserver started at http://127.12.45.1:33597/ using document root <none> and password file <none>
I20250811 02:03:44.928941 15797 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:44.929229 15797 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:44.939440 15797 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.006s sys 0.002s
I20250811 02:03:44.945287 15832 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:44.946501 15797 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250811 02:03:44.946887 15797 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1eb10bfe655143db90d05241378bac9e"
format_stamp: "Formatted at 2025-08-11 02:03:30 on dist-test-slave-xn5f"
I20250811 02:03:44.949602 15797 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:45.024947 15797 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:45.027076 15797 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:45.027660 15797 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:45.031036 15797 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:45.038540 15839 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:03:45.045964 15797 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:03:45.046275 15797 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.001s
I20250811 02:03:45.046628 15797 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:03:45.053792 15797 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:03:45.054057 15797 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.005s sys 0.000s
I20250811 02:03:45.054422 15839 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap starting.
I20250811 02:03:45.117697 15839 log.cc:826] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:45.246511 15839 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:45.247144 15797 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:33479
I20250811 02:03:45.247350 15946 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:33479 every 8 connection(s)
I20250811 02:03:45.247772 15839 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap complete.
I20250811 02:03:45.249468 15839 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent bootstrapping tablet: real 0.195s user 0.147s sys 0.043s
I20250811 02:03:45.250988 15797 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:03:45.256850 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15797
I20250811 02:03:45.258617 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:44385
--local_ip_for_outbound_sockets=127.12.45.2
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=42569
--webserver_interface=127.12.45.2
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:45.275153 15839 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:45.279155 15839 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Initialized, Role: FOLLOWER
I20250811 02:03:45.280357 15839 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:45.281255 15839 raft_consensus.cc:397] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:45.281730 15839 raft_consensus.cc:491] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:45.282279 15839 raft_consensus.cc:3058] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:03:45.293758 15839 raft_consensus.cc:513] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:45.295078 15839 leader_election.cc:304] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e; no voters:
I20250811 02:03:45.304878 15952 raft_consensus.cc:2802] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:03:45.304616 15839 leader_election.cc:290] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 02:03:45.316864 15952 raft_consensus.cc:695] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 LEADER]: Becoming Leader. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Running, Role: LEADER
I20250811 02:03:45.317430 15947 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:45.318006 15947 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:45.319258 15839 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent starting tablet: real 0.070s user 0.050s sys 0.009s
I20250811 02:03:45.319478 15947 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:45.320501 15952 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:03:45.333233 15760 ts_manager.cc:194] Registered new tserver with Master: 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479)
I20250811 02:03:45.337390 15760 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: term changed from 0 to 2, leader changed from <none> to 1eb10bfe655143db90d05241378bac9e (127.12.45.1), VOTER 1eb10bfe655143db90d05241378bac9e (127.12.45.1) added. New cstate: current_term: 2 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } }
W20250811 02:03:45.386299 15760 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index -1: no extra replica candidate found for tablet c646bf4f65cc45208f9880e776286dc1 (table TestTable [id=782a33ca53994f12a00abb6cd46fe772]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (1) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
I20250811 02:03:45.388710 15760 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:52093
I20250811 02:03:45.393602 15947 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
W20250811 02:03:45.619053 15951 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:45.619611 15951 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:45.620127 15951 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:45.652091 15951 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:45.652930 15951 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:03:45.687994 15951 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:44385
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=42569
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:45.689559 15951 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:45.691311 15951 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:45.705201 15966 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:45.706717 15967 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:46.998654 15951 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.294s user 0.374s sys 0.897s
W20250811 02:03:46.998764 15968 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1290 milliseconds
W20250811 02:03:46.999143 15951 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.294s user 0.378s sys 0.897s
W20250811 02:03:47.000097 15969 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:47.000174 15951 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:47.001273 15951 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:47.006250 15951 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:47.007669 15951 hybrid_clock.cc:648] HybridClock initialized: now 1754877827007638 us; error 40 us; skew 500 ppm
I20250811 02:03:47.008431 15951 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:47.015504 15951 webserver.cc:489] Webserver started at http://127.12.45.2:42569/ using document root <none> and password file <none>
I20250811 02:03:47.016456 15951 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:47.016695 15951 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:47.025110 15951 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250811 02:03:47.030200 15977 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:47.031431 15951 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.000s
I20250811 02:03:47.031735 15951 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "91bc21b8f774428bae1e2365ab7e1f37"
format_stamp: "Formatted at 2025-08-11 02:03:31 on dist-test-slave-xn5f"
I20250811 02:03:47.033640 15951 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:47.099936 15951 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:47.101433 15951 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:47.101857 15951 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:47.104387 15951 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:47.110047 15984 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:03:47.117753 15951 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:03:47.117998 15951 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.001s
I20250811 02:03:47.118263 15951 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:03:47.122845 15951 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:03:47.123078 15951 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.005s sys 0.000s
I20250811 02:03:47.123456 15984 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap starting.
I20250811 02:03:47.177279 15984 log.cc:826] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:47.265192 15984 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:47.265967 15984 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap complete.
I20250811 02:03:47.267366 15984 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent bootstrapping tablet: real 0.144s user 0.103s sys 0.040s
I20250811 02:03:47.284760 15984 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:47.287154 15984 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Initialized, Role: FOLLOWER
I20250811 02:03:47.288452 15984 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:47.288904 15984 raft_consensus.cc:397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:47.289168 15984 raft_consensus.cc:491] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:47.289467 15984 raft_consensus.cc:3058] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:03:47.296775 15984 raft_consensus.cc:513] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:47.297472 15984 leader_election.cc:304] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 91bc21b8f774428bae1e2365ab7e1f37; no voters:
I20250811 02:03:47.301136 15984 leader_election.cc:290] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 02:03:47.301468 16089 raft_consensus.cc:2802] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:03:47.306555 15984 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent starting tablet: real 0.039s user 0.029s sys 0.006s
I20250811 02:03:47.307299 16089 raft_consensus.cc:695] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEADER]: Becoming Leader. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Running, Role: LEADER
I20250811 02:03:47.308034 16089 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } }
I20250811 02:03:47.308496 15951 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:44385
I20250811 02:03:47.308718 16094 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:44385 every 8 connection(s)
I20250811 02:03:47.311113 15951 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:03:47.313696 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 15951
I20250811 02:03:47.316169 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:36733
--local_ip_for_outbound_sockets=127.12.45.3
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=40649
--webserver_interface=127.12.45.3
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:47.337087 16096 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:47.337524 16096 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:47.338511 16096 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:47.342218 15760 ts_manager.cc:194] Registered new tserver with Master: 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385)
I20250811 02:03:47.343570 15760 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: term changed from 0 to 2, leader changed from <none> to 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2), VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) added. New cstate: current_term: 2 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:47.356673 15760 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:45829
I20250811 02:03:47.362039 16096 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:03:47.379328 16047 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: NON_VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: true } }
I20250811 02:03:47.382808 16089 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER 1eb10bfe655143db90d05241378bac9e (127.12.45.1) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: NON_VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: true } } }
I20250811 02:03:47.392076 15746 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0b62a4d4eed4485aa1f36bc304d94a53 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250811 02:03:47.396703 15759 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: config changed from index -1 to 8, NON_VOTER 1eb10bfe655143db90d05241378bac9e (127.12.45.1) added. New cstate: current_term: 2 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: NON_VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250811 02:03:47.397951 15979 consensus_peers.cc:489] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 -> Peer 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479): Couldn't send request to peer 1eb10bfe655143db90d05241378bac9e. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0b62a4d4eed4485aa1f36bc304d94a53. This is attempt 1: this message will repeat every 5th retry.
W20250811 02:03:47.639451 16101 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:47.639912 16101 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:47.640324 16101 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:47.670562 16101 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:47.671445 16101 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:03:47.697352 15902 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: NON_VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: true } }
I20250811 02:03:47.704623 16115 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index -1 to 9, NON_VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: NON_VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: true } } }
I20250811 02:03:47.711927 16101 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:36733
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=40649
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:47.713529 16101 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:47.714223 15745 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 8)
I20250811 02:03:47.715636 16101 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250811 02:03:47.716593 15759 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index -1 to 9, NON_VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) added. New cstate: current_term: 2 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: NON_VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250811 02:03:47.717622 15835 consensus_peers.cc:489] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e -> Peer 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385): Couldn't send request to peer 91bc21b8f774428bae1e2365ab7e1f37. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: c646bf4f65cc45208f9880e776286dc1. This is attempt 1: this message will repeat every 5th retry.
W20250811 02:03:47.729796 16118 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:48.044368 16125 ts_tablet_manager.cc:927] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Initiating tablet copy from peer 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385)
I20250811 02:03:48.053822 16125 tablet_copy_client.cc:323] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: tablet copy: Beginning tablet copy session from remote peer at address 127.12.45.2:44385
I20250811 02:03:48.056700 16067 tablet_copy_service.cc:140] P 91bc21b8f774428bae1e2365ab7e1f37: Received BeginTabletCopySession request for tablet 0b62a4d4eed4485aa1f36bc304d94a53 from peer 1eb10bfe655143db90d05241378bac9e ({username='slave'} at 127.12.45.1:44967)
I20250811 02:03:48.057601 16067 tablet_copy_service.cc:161] P 91bc21b8f774428bae1e2365ab7e1f37: Beginning new tablet copy session on tablet 0b62a4d4eed4485aa1f36bc304d94a53 from peer 1eb10bfe655143db90d05241378bac9e at {username='slave'} at 127.12.45.1:44967: session id = 1eb10bfe655143db90d05241378bac9e-0b62a4d4eed4485aa1f36bc304d94a53
I20250811 02:03:48.069625 16067 tablet_copy_source_session.cc:215] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 02:03:48.077104 16125 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0b62a4d4eed4485aa1f36bc304d94a53. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:48.102345 16125 tablet_copy_client.cc:806] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: tablet copy: Starting download of 0 data blocks...
I20250811 02:03:48.103358 16125 tablet_copy_client.cc:670] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: tablet copy: Starting download of 1 WAL segments...
I20250811 02:03:48.110682 16125 tablet_copy_client.cc:538] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 02:03:48.123695 16125 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap starting.
I20250811 02:03:48.267143 16129 ts_tablet_manager.cc:927] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Initiating tablet copy from peer 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479)
I20250811 02:03:48.281484 16129 tablet_copy_client.cc:323] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: tablet copy: Beginning tablet copy session from remote peer at address 127.12.45.1:33479
I20250811 02:03:48.284485 15922 tablet_copy_service.cc:140] P 1eb10bfe655143db90d05241378bac9e: Received BeginTabletCopySession request for tablet c646bf4f65cc45208f9880e776286dc1 from peer 91bc21b8f774428bae1e2365ab7e1f37 ({username='slave'} at 127.12.45.2:47419)
I20250811 02:03:48.285362 15922 tablet_copy_service.cc:161] P 1eb10bfe655143db90d05241378bac9e: Beginning new tablet copy session on tablet c646bf4f65cc45208f9880e776286dc1 from peer 91bc21b8f774428bae1e2365ab7e1f37 at {username='slave'} at 127.12.45.2:47419: session id = 91bc21b8f774428bae1e2365ab7e1f37-c646bf4f65cc45208f9880e776286dc1
I20250811 02:03:48.296602 15922 tablet_copy_source_session.cc:215] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 02:03:48.319864 16129 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c646bf4f65cc45208f9880e776286dc1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:48.407047 16129 tablet_copy_client.cc:806] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: tablet copy: Starting download of 0 data blocks...
I20250811 02:03:48.411191 16129 tablet_copy_client.cc:670] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: tablet copy: Starting download of 1 WAL segments...
I20250811 02:03:48.418484 16129 tablet_copy_client.cc:538] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 02:03:48.447536 16129 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap starting.
I20250811 02:03:48.452476 16125 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:48.475345 16125 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap complete.
I20250811 02:03:48.476497 16125 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Time spent bootstrapping tablet: real 0.353s user 0.146s sys 0.021s
I20250811 02:03:48.491250 16125 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: NON_VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: true } }
I20250811 02:03:48.492236 16125 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Initialized, Role: LEARNER
I20250811 02:03:48.493227 16125 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: NON_VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: true } }
I20250811 02:03:48.507241 16125 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Time spent starting tablet: real 0.028s user 0.006s sys 0.001s
I20250811 02:03:48.526301 16067 tablet_copy_service.cc:342] P 91bc21b8f774428bae1e2365ab7e1f37: Request end of tablet copy session 1eb10bfe655143db90d05241378bac9e-0b62a4d4eed4485aa1f36bc304d94a53 received from {username='slave'} at 127.12.45.1:44967
I20250811 02:03:48.527258 16067 tablet_copy_service.cc:434] P 91bc21b8f774428bae1e2365ab7e1f37: ending tablet copy session 1eb10bfe655143db90d05241378bac9e-0b62a4d4eed4485aa1f36bc304d94a53 on tablet 0b62a4d4eed4485aa1f36bc304d94a53 with peer 1eb10bfe655143db90d05241378bac9e
I20250811 02:03:48.767351 16129 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:48.768780 16129 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap complete.
I20250811 02:03:48.769727 16129 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent bootstrapping tablet: real 0.323s user 0.170s sys 0.016s
I20250811 02:03:48.773490 16129 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: NON_VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: true } }
I20250811 02:03:48.774515 16129 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Initialized, Role: LEARNER
I20250811 02:03:48.775430 16129 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: NON_VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: true } }
I20250811 02:03:48.796451 16129 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent starting tablet: real 0.026s user 0.008s sys 0.000s
I20250811 02:03:48.808573 15922 tablet_copy_service.cc:342] P 1eb10bfe655143db90d05241378bac9e: Request end of tablet copy session 91bc21b8f774428bae1e2365ab7e1f37-c646bf4f65cc45208f9880e776286dc1 received from {username='slave'} at 127.12.45.2:47419
I20250811 02:03:48.809262 15922 tablet_copy_service.cc:434] P 1eb10bfe655143db90d05241378bac9e: ending tablet copy session 91bc21b8f774428bae1e2365ab7e1f37-c646bf4f65cc45208f9880e776286dc1 on tablet c646bf4f65cc45208f9880e776286dc1 with peer 91bc21b8f774428bae1e2365ab7e1f37
I20250811 02:03:48.816630 16047 raft_consensus.cc:1215] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250811 02:03:48.870850 15902 raft_consensus.cc:1215] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 LEARNER]: Deduplicated request from leader. Original: 2.7->[2.8-2.8] Dedup: 2.8->[]
W20250811 02:03:49.132459 16117 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 16101
W20250811 02:03:49.173718 16101 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.444s user 0.403s sys 0.867s
W20250811 02:03:49.174191 16101 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.445s user 0.403s sys 0.867s
W20250811 02:03:47.730276 16119 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:49.175909 16121 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:49.178376 16120 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1444 milliseconds
I20250811 02:03:49.178428 16101 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:49.179684 16101 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:49.181754 16101 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:49.183166 16101 hybrid_clock.cc:648] HybridClock initialized: now 1754877829183136 us; error 32 us; skew 500 ppm
I20250811 02:03:49.183969 16101 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:49.189958 16101 webserver.cc:489] Webserver started at http://127.12.45.3:40649/ using document root <none> and password file <none>
I20250811 02:03:49.191249 16101 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:49.191478 16101 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:49.199864 16101 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.008s sys 0.000s
I20250811 02:03:49.204625 16139 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:49.205730 16101 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 02:03:49.206054 16101 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "9265cb3403ac47649cd338059475e08d"
format_stamp: "Formatted at 2025-08-11 02:03:33 on dist-test-slave-xn5f"
I20250811 02:03:49.208119 16101 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:49.268802 16101 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:49.270313 16101 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:49.270740 16101 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:49.273190 16101 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:03:49.278707 16146 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:03:49.286445 16101 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:03:49.286695 16101 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.000s
I20250811 02:03:49.286996 16101 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:03:49.288017 16133 raft_consensus.cc:1062] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: attempting to promote NON_VOTER 91bc21b8f774428bae1e2365ab7e1f37 to VOTER
I20250811 02:03:49.289791 16133 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } }
I20250811 02:03:49.293886 16101 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:03:49.294100 16101 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.003s sys 0.002s
I20250811 02:03:49.294520 16146 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:03:49.294770 16047 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEARNER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250811 02:03:49.296672 16128 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.001s
I20250811 02:03:49.329696 16131 raft_consensus.cc:1062] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: attempting to promote NON_VOTER 1eb10bfe655143db90d05241378bac9e to VOTER
I20250811 02:03:49.332245 16131 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } }
I20250811 02:03:49.344151 16128 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } }
I20250811 02:03:49.349854 16047 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } }
I20250811 02:03:49.351222 15902 raft_consensus.cc:1273] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 LEARNER]: Refusing update from remote peer 91bc21b8f774428bae1e2365ab7e1f37: Log matching property violated. Preceding OpId in replica: term: 2 index: 8. Preceding OpId from leader: term: 2 index: 9. (index mismatch)
I20250811 02:03:49.354228 16124 consensus_queue.cc:1035] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 9, Last known committed idx: 8, Time since last communication: 0.001s
I20250811 02:03:49.356678 15759 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index 9 to 10, 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250811 02:03:49.380025 16124 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, 1eb10bfe655143db90d05241378bac9e (127.12.45.1) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } }
I20250811 02:03:49.384752 15901 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Committing config change with OpId 2.9: config changed from index 8 to 9, 1eb10bfe655143db90d05241378bac9e (127.12.45.1) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } }
I20250811 02:03:49.388815 16146 log.cc:826] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:49.400488 15760 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index 8 to 9, 1eb10bfe655143db90d05241378bac9e (127.12.45.1) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } }
I20250811 02:03:49.502908 16146 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:49.503744 16146 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:03:49.505638 16146 ts_tablet_manager.cc:1397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.212s user 0.142s sys 0.039s
I20250811 02:03:49.521562 16146 raft_consensus.cc:357] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:49.523573 16146 raft_consensus.cc:738] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: FOLLOWER
I20250811 02:03:49.524380 16146 consensus_queue.cc:260] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:49.525008 16146 raft_consensus.cc:397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:49.525318 16146 raft_consensus.cc:491] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:49.525734 16146 raft_consensus.cc:3058] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:03:49.531997 16146 raft_consensus.cc:513] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:49.532631 16146 leader_election.cc:304] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9265cb3403ac47649cd338059475e08d; no voters:
I20250811 02:03:49.535197 16146 leader_election.cc:290] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 02:03:49.535593 16254 raft_consensus.cc:2802] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:03:49.539052 16254 raft_consensus.cc:695] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 LEADER]: Becoming Leader. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Running, Role: LEADER
I20250811 02:03:49.540138 16254 consensus_queue.cc:237] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:03:49.541988 16146 ts_tablet_manager.cc:1428] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.036s user 0.033s sys 0.004s
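The replica started above is the only VOTER in its Raft config, so it skips the normal election timeout and elects itself immediately ("Only one voter in the Raft config. Triggering election immediately"). A minimal Python sketch of that decision rule, purely illustrative and not Kudu's implementation (the function names here are hypothetical):

def majority_size(num_voters):
    # Majority of N voters is floor(N / 2) + 1; for a single voter this is 1.
    return num_voters // 2 + 1

def decide_election(yes_votes, num_voters):
    # The candidate always votes for itself, so with one voter the election
    # is decided as soon as it starts ("1 yes votes; 0 no votes" above).
    return "WON" if yes_votes >= majority_size(num_voters) else "LOST"

assert majority_size(1) == 1
assert decide_election(yes_votes=1, num_voters=1) == "WON"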
I20250811 02:03:49.549629 16101 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:36733
I20250811 02:03:49.550256 16266 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:36733 every 8 connection(s)
I20250811 02:03:49.552259 16101 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:03:49.561065 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16101
I20250811 02:03:49.575359 16267 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:03:49.575882 16267 heartbeater.cc:461] Registering TS with master...
I20250811 02:03:49.577065 16267 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:03:49.580588 15760 ts_manager.cc:194] Registered new tserver with Master: 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
I20250811 02:03:49.581518 15760 catalog_manager.cc:5582] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "9265cb3403ac47649cd338059475e08d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } health_report { overall_health: HEALTHY } } }
I20250811 02:03:49.587836 15760 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:51013
I20250811 02:03:49.590938 16267 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:03:49.591970 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:03:49.596340 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250811 02:03:49.599354 12468 ts_itest-base.cc:209] found only 2 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER } interned_replicas { ts_info_idx: 1 role: FOLLOWER }
I20250811 02:03:49.606285 16047 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:49.610515 15901 raft_consensus.cc:1273] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Refusing update from remote peer 91bc21b8f774428bae1e2365ab7e1f37: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250811 02:03:49.611819 16124 consensus_queue.cc:1035] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
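The follower refused the update because the leader's preceding OpId (2.10) is not the last entry in the follower's own log (2.9); the leader then records LMP_MISMATCH, lowers that peer's next index, and retries. A rough sketch of the log-matching check under a simplified (term, index) model of an OpId:

def log_matches(follower_last_opid, leader_preceding_opid):
    # Raft's log matching property: the follower only accepts an append if it
    # already holds the entry immediately preceding the new batch.
    return follower_last_opid == leader_preceding_opid

# Mirrors the refusal above: replica tail is 2.9, leader assumed 2.10 precedes.
assert not log_matches(follower_last_opid=(2, 9), leader_preceding_opid=(2, 10))
# After the leader backs off to next index 10, the preceding OpId becomes 2.9.
assert log_matches(follower_last_opid=(2, 9), leader_preceding_opid=(2, 9))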
I20250811 02:03:49.616894 16131 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } } }
I20250811 02:03:49.618361 15901 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } } }
W20250811 02:03:49.624176 15981 consensus_peers.cc:489] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 -> Peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Couldn't send request to peer 9265cb3403ac47649cd338059475e08d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0b62a4d4eed4485aa1f36bc304d94a53. This is attempt 1: this message will repeat every 5th retry.
I20250811 02:03:49.624420 15746 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0b62a4d4eed4485aa1f36bc304d94a53 with cas_config_opid_index 9: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 4)
I20250811 02:03:49.626467 15746 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0b62a4d4eed4485aa1f36bc304d94a53 with cas_config_opid_index 8: aborting the task: latest config opid_index 9; task opid_index 8
I20250811 02:03:49.628355 15759 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: config changed from index 9 to 10, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New cstate: current_term: 2 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
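The master drives ADD_PEER as a compare-and-swap against the committed config's opid_index (the cas_config_opid_index in the two catalog_manager lines above), so a task built against index 9 succeeds while a stale one built against index 8 is aborted. A hedged sketch of that precondition; the function and field names are hypothetical, not Kudu's API:

def apply_change_config(committed_opid_index, cas_config_opid_index, new_config):
    # Only apply the change if the config has not moved since the task was built.
    if cas_config_opid_index != committed_opid_index:
        return None, "aborted: latest config opid_index %d; task opid_index %d" % (
            committed_opid_index, cas_config_opid_index)
    return new_config, "applied"

_, status_ok = apply_change_config(9, 9, {"opid_index": 10})
_, status_stale = apply_change_config(9, 8, {"opid_index": 9})
assert status_ok == "applied"
assert status_stale.startswith("aborted")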
I20250811 02:03:49.709604 15901 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:49.713894 16047 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 02:03:49.715142 16133 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250811 02:03:49.720692 16133 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } } }
I20250811 02:03:49.721856 16047 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } } }
W20250811 02:03:49.722424 15836 consensus_peers.cc:489] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e -> Peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Couldn't send request to peer 9265cb3403ac47649cd338059475e08d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: c646bf4f65cc45208f9880e776286dc1. This is attempt 1: this message will repeat every 5th retry.
I20250811 02:03:49.726840 15745 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index 10: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 5)
I20250811 02:03:49.730060 15760 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index 10 to 11, NON_VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New cstate: current_term: 2 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250811 02:03:49.934505 15746 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index 9: aborting the task: latest config opid_index 11; task opid_index 9
I20250811 02:03:50.005923 16276 ts_tablet_manager.cc:927] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Initiating tablet copy from peer 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385)
I20250811 02:03:50.007862 16276 tablet_copy_client.cc:323] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: tablet copy: Beginning tablet copy session from remote peer at address 127.12.45.2:44385
I20250811 02:03:50.016824 16067 tablet_copy_service.cc:140] P 91bc21b8f774428bae1e2365ab7e1f37: Received BeginTabletCopySession request for tablet 0b62a4d4eed4485aa1f36bc304d94a53 from peer 9265cb3403ac47649cd338059475e08d ({username='slave'} at 127.12.45.3:46381)
I20250811 02:03:50.017216 16067 tablet_copy_service.cc:161] P 91bc21b8f774428bae1e2365ab7e1f37: Beginning new tablet copy session on tablet 0b62a4d4eed4485aa1f36bc304d94a53 from peer 9265cb3403ac47649cd338059475e08d at {username='slave'} at 127.12.45.3:46381: session id = 9265cb3403ac47649cd338059475e08d-0b62a4d4eed4485aa1f36bc304d94a53
I20250811 02:03:50.021066 16067 tablet_copy_source_session.cc:215] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 02:03:50.023818 16276 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0b62a4d4eed4485aa1f36bc304d94a53. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:50.035823 16276 tablet_copy_client.cc:806] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: tablet copy: Starting download of 0 data blocks...
I20250811 02:03:50.036291 16276 tablet_copy_client.cc:670] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: tablet copy: Starting download of 1 WAL segments...
I20250811 02:03:50.039479 16276 tablet_copy_client.cc:538] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 02:03:50.044595 16276 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:03:50.111012 16276 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:50.111706 16276 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:03:50.112105 16276 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.068s user 0.062s sys 0.006s
I20250811 02:03:50.113729 16276 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:50.114253 16276 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: LEARNER
I20250811 02:03:50.114666 16276 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:50.116978 16276 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.005s user 0.005s sys 0.000s
I20250811 02:03:50.118476 16067 tablet_copy_service.cc:342] P 91bc21b8f774428bae1e2365ab7e1f37: Request end of tablet copy session 9265cb3403ac47649cd338059475e08d-0b62a4d4eed4485aa1f36bc304d94a53 received from {username='slave'} at 127.12.45.3:46381
I20250811 02:03:50.118767 16067 tablet_copy_service.cc:434] P 91bc21b8f774428bae1e2365ab7e1f37: ending tablet copy session 9265cb3403ac47649cd338059475e08d-0b62a4d4eed4485aa1f36bc304d94a53 on tablet 0b62a4d4eed4485aa1f36bc304d94a53 with peer 9265cb3403ac47649cd338059475e08d
I20250811 02:03:50.154848 16276 ts_tablet_manager.cc:927] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Initiating tablet copy from peer 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479)
I20250811 02:03:50.156394 16276 tablet_copy_client.cc:323] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: tablet copy: Beginning tablet copy session from remote peer at address 127.12.45.1:33479
I20250811 02:03:50.164774 15922 tablet_copy_service.cc:140] P 1eb10bfe655143db90d05241378bac9e: Received BeginTabletCopySession request for tablet c646bf4f65cc45208f9880e776286dc1 from peer 9265cb3403ac47649cd338059475e08d ({username='slave'} at 127.12.45.3:55597)
I20250811 02:03:50.165122 15922 tablet_copy_service.cc:161] P 1eb10bfe655143db90d05241378bac9e: Beginning new tablet copy session on tablet c646bf4f65cc45208f9880e776286dc1 from peer 9265cb3403ac47649cd338059475e08d at {username='slave'} at 127.12.45.3:55597: session id = 9265cb3403ac47649cd338059475e08d-c646bf4f65cc45208f9880e776286dc1
I20250811 02:03:50.169143 15922 tablet_copy_source_session.cc:215] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 02:03:50.171180 16276 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c646bf4f65cc45208f9880e776286dc1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:03:50.179167 16276 tablet_copy_client.cc:806] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: tablet copy: Starting download of 0 data blocks...
I20250811 02:03:50.179507 16276 tablet_copy_client.cc:670] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: tablet copy: Starting download of 1 WAL segments...
I20250811 02:03:50.182426 16276 tablet_copy_client.cc:538] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 02:03:50.186273 16276 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:03:50.253485 16276 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:50.254066 16276 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:03:50.254470 16276 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.068s user 0.068s sys 0.000s
I20250811 02:03:50.255910 16276 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:50.256356 16276 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: LEARNER
I20250811 02:03:50.256754 16276 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: NON_VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: true } }
I20250811 02:03:50.258239 16276 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250811 02:03:50.259785 15922 tablet_copy_service.cc:342] P 1eb10bfe655143db90d05241378bac9e: Request end of tablet copy session 9265cb3403ac47649cd338059475e08d-c646bf4f65cc45208f9880e776286dc1 received from {username='slave'} at 127.12.45.3:55597
I20250811 02:03:50.260192 15922 tablet_copy_service.cc:434] P 1eb10bfe655143db90d05241378bac9e: ending tablet copy session 9265cb3403ac47649cd338059475e08d-c646bf4f65cc45208f9880e776286dc1 on tablet c646bf4f65cc45208f9880e776286dc1 with peer 9265cb3403ac47649cd338059475e08d
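Both tablet copies above follow the same ordering: begin a session with the source peer, download the data blocks and WAL segments, replace the local tablet superblock, end the session, then bootstrap (replay the WAL) and start the replica as a LEARNER. A compact, self-contained sketch of that ordering with toy in-memory structures (hypothetical names, not the tablet_copy_client API):

def tablet_copy(source, tablet_id):
    session = source["sessions"][tablet_id]               # BeginTabletCopySession
    blocks = list(session["blocks"])                       # download data blocks
    wal_segments = list(session["wal_segments"])           # download WAL segments
    superblock = session["superblock"]                     # replace tablet superblock
    ops_replayed = sum(len(seg) for seg in wal_segments)   # bootstrap: replay the WAL
    return {"superblock": superblock, "ops_replayed": ops_replayed}

# A toy source peer with one WAL segment of 10 ops, like tablet 0b62... above.
peer = {"sessions": {"0b62": {"blocks": [],
                              "wal_segments": [list(range(10))],
                              "superblock": "sb-0b62"}}}
assert tablet_copy(peer, "0b62")["ops_replayed"] == 10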
I20250811 02:03:50.475804 16217 raft_consensus.cc:1215] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
I20250811 02:03:50.549930 16217 raft_consensus.cc:1215] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.10->[2.11-2.11] Dedup: 2.11->[]
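Right after the copy, each LEARNER already holds the last op (2.10, respectively 2.11) from the WAL it just replayed, so the leader's first batch is deduplicated down to an empty append, as the two lines above show. A small sketch of that dedup, assuming ops are identified by (term, index) pairs:

def dedup_request(local_last_opid, batch):
    # Drop any ops the replica has already appended; keep only ops whose index
    # is beyond the local log tail.
    return [op for op in batch if op[1] > local_last_opid[1]]

# Mirrors "Original: 2.9->[2.10-2.10] Dedup: 2.10->[]" for a replica at 2.10.
assert dedup_request(local_last_opid=(2, 10), batch=[(2, 10)]) == []
assert dedup_request(local_last_opid=(2, 9), batch=[(2, 10)]) == [(2, 10)]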
I20250811 02:03:50.603672 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1eb10bfe655143db90d05241378bac9e to finish bootstrapping
I20250811 02:03:50.620766 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 91bc21b8f774428bae1e2365ab7e1f37 to finish bootstrapping
I20250811 02:03:50.639312 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 9265cb3403ac47649cd338059475e08d to finish bootstrapping
I20250811 02:03:50.942305 16027 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:03:50.943706 15882 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:03:50.952345 16197 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:03:50.998024 16285 raft_consensus.cc:1062] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: attempting to promote NON_VOTER 9265cb3403ac47649cd338059475e08d to VOTER
I20250811 02:03:51.001230 16285 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:03:51.016714 16217 raft_consensus.cc:1273] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Refusing update from remote peer 91bc21b8f774428bae1e2365ab7e1f37: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 02:03:51.018584 16317 consensus_queue.cc:1035] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250811 02:03:51.027676 15901 raft_consensus.cc:1273] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Refusing update from remote peer 91bc21b8f774428bae1e2365ab7e1f37: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 02:03:51.029322 16317 consensus_queue.cc:1035] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250811 02:03:51.038784 16217 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.039806 16320 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.062646 15758 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: config changed from index 10 to 11, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "91bc21b8f774428bae1e2365ab7e1f37" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250811 02:03:51.080698 15901 raft_consensus.cc:2953] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.162400 16287 raft_consensus.cc:1062] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: attempting to promote NON_VOTER 9265cb3403ac47649cd338059475e08d to VOTER
I20250811 02:03:51.164510 16287 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:03:51.171201 16217 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 LEARNER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250811 02:03:51.172308 16047 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250811 02:03:51.172434 16287 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
I20250811 02:03:51.173847 16327 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250811 02:03:51.181461 16217 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.181730 16047 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.179984 16287 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
I20250811 02:03:51.192056 15759 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 reported cstate change: config changed from index 11 to 12, 9265cb3403ac47649cd338059475e08d (127.12.45.3) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
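Once a NON_VOTER added with attrs { promote: true } has caught up to the leader's log, the leader rewrites the config to make it a VOTER and clears the promote flag, which is the "changed from NON_VOTER to VOTER" transition committed above for both tablets. A hedged sketch of the promotion decision; the field names are hypothetical:

def should_promote(peer, leader_last_index, lag_threshold=0):
    # Promote a caught-up NON_VOTER that was added with the intent to promote.
    return (peer["member_type"] == "NON_VOTER"
            and peer["promote"]
            and leader_last_index - peer["last_received_index"] <= lag_threshold)

peer = {"member_type": "NON_VOTER", "promote": True, "last_received_index": 11}
assert should_promote(peer, leader_last_index=11)
assert not should_promote({"member_type": "VOTER", "promote": False,
                           "last_received_index": 11}, leader_last_index=11)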
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
486f1497202943c283c8305e5ca9a2e7 | 127.12.45.62:38233 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:45821 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
1eb10bfe655143db90d05241378bac9e | 127.12.45.1:33479 | HEALTHY | <none> | 1 | 0
91bc21b8f774428bae1e2365ab7e1f37 | 127.12.45.2:44385 | HEALTHY | <none> | 1 | 0
9265cb3403ac47649cd338059475e08d | 127.12.45.3:36733 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.12.45.1 | experimental | 127.12.45.1:33479
local_ip_for_outbound_sockets | 127.12.45.2 | experimental | 127.12.45.2:44385
local_ip_for_outbound_sockets | 127.12.45.3 | experimental | 127.12.45.3:36733
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.12.45.1:33479
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.12.45.2:44385
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.12.45.3:36733
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:45821 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
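The block above appears to be the cluster health-check output the test shells out to (kudu cluster ksck); the run is treated as healthy because it ends with "OK", with the unsafe/experimental-flag warnings reported but non-fatal. An illustrative way a harness might gate on that output, not the actual test code:

def ksck_passed(output):
    # Treat the check as passed only if the final non-empty line is exactly "OK".
    lines = [line.strip() for line in output.splitlines() if line.strip()]
    return bool(lines) and lines[-1] == "OK"

assert ksck_passed("Warnings:\n"
                   "Some masters have unsafe, experimental, or hidden flags set\n"
                   "OK")
assert not ksck_passed("Error: table TestTable has under-replicated tablets")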
I20250811 02:03:51.211710 12468 log_verifier.cc:126] Checking tablet 0b62a4d4eed4485aa1f36bc304d94a53
I20250811 02:03:51.310005 12468 log_verifier.cc:177] Verified matching terms for 11 ops in tablet 0b62a4d4eed4485aa1f36bc304d94a53
I20250811 02:03:51.310348 12468 log_verifier.cc:126] Checking tablet bf8ce350bb0d4d84a7bd8dd00558a9b8
I20250811 02:03:51.336614 12468 log_verifier.cc:177] Verified matching terms for 8 ops in tablet bf8ce350bb0d4d84a7bd8dd00558a9b8
I20250811 02:03:51.336846 12468 log_verifier.cc:126] Checking tablet c646bf4f65cc45208f9880e776286dc1
I20250811 02:03:51.415079 12468 log_verifier.cc:177] Verified matching terms for 12 ops in tablet c646bf4f65cc45208f9880e776286dc1
I20250811 02:03:51.415532 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15727
I20250811 02:03:51.441136 12468 minidump.cc:252] Setting minidump size limit to 20M
I20250811 02:03:51.442531 12468 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:51.443881 12468 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:51.455142 16333 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:51.538182 16096 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
W20250811 02:03:51.456722 16336 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:51.455219 16334 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:51.562786 12468 server_base.cc:1047] running on GCE node
I20250811 02:03:51.564102 12468 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250811 02:03:51.564309 12468 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250811 02:03:51.564447 12468 hybrid_clock.cc:648] HybridClock initialized: now 1754877831564433 us; error 0 us; skew 500 ppm
I20250811 02:03:51.565032 12468 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:51.568727 12468 webserver.cc:489] Webserver started at http://0.0.0.0:38399/ using document root <none> and password file <none>
I20250811 02:03:51.569597 12468 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:51.569820 12468 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:51.576122 12468 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.002s sys 0.002s
I20250811 02:03:51.579938 16342 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:51.580911 12468 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 02:03:51.581238 12468 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:51.583118 12468 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:03:51.609661 12468 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:51.611078 12468 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:51.611510 12468 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:51.620708 12468 sys_catalog.cc:263] Verifying existing consensus state
W20250811 02:03:51.624176 12468 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.12.45.62:38233 exists but no master address supplied!
I20250811 02:03:51.625993 12468 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap starting.
I20250811 02:03:51.665102 12468 log.cc:826] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:51.723891 12468 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap replayed 1/1 log segments. Stats: ops{read=30 overwritten=0 applied=30 ignored=0} inserts{seen=13 ignored=0} mutations{seen=21 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:51.724673 12468 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap complete.
I20250811 02:03:51.737468 12468 raft_consensus.cc:357] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:51.738003 12468 raft_consensus.cc:738] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Initialized, Role: FOLLOWER
I20250811 02:03:51.738626 12468 consensus_queue.cc:260] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:51.739133 12468 raft_consensus.cc:397] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:51.739352 12468 raft_consensus.cc:491] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:51.739622 12468 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 3 FOLLOWER]: Advancing to term 4
I20250811 02:03:51.744710 12468 raft_consensus.cc:513] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:51.745332 12468 leader_election.cc:304] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 486f1497202943c283c8305e5ca9a2e7; no voters:
I20250811 02:03:51.746443 12468 leader_election.cc:290] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 4 election: Requested vote from peers
I20250811 02:03:51.746712 16349 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 4 FOLLOWER]: Leader election won for term 4
I20250811 02:03:51.748107 16349 raft_consensus.cc:695] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 4 LEADER]: Becoming Leader. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Running, Role: LEADER
I20250811 02:03:51.748875 16349 consensus_queue.cc:237] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 30, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:51.755919 16351 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 486f1497202943c283c8305e5ca9a2e7. Latest consensus state: current_term: 4 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:51.756578 16351 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:51.756551 16350 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:51.757104 16350 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:51.782541 12468 tablet_replica.cc:331] stopping tablet replica
I20250811 02:03:51.783226 12468 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 4 LEADER]: Raft consensus shutting down.
I20250811 02:03:51.783767 12468 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 4 FOLLOWER]: Raft consensus is shut down!
I20250811 02:03:51.786638 12468 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250811 02:03:51.787175 12468 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250811 02:03:51.811663 12468 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250811 02:03:52.233857 15947 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
W20250811 02:03:52.238278 16267 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:38233 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:38233: connect: Connection refused (error 111)
I20250811 02:03:56.853322 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15797
I20250811 02:03:56.879673 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 15951
I20250811 02:03:56.909118 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16101
I20250811 02:03:56.938367 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--webserver_interface=127.12.45.62
--webserver_port=37139
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:38233 with env {}
W20250811 02:03:57.247864 16425 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:57.248543 16425 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:57.248972 16425 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:57.279150 16425 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:03:57.279505 16425 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:57.279763 16425 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:03:57.280004 16425 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:03:57.315094 16425 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:38233
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:38233
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=37139
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:57.316649 16425 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:57.318403 16425 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:57.329375 16431 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:57.329967 16432 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:03:58.498347 16425 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.169s user 0.371s sys 0.791s
W20250811 02:03:58.498515 16433 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1166 milliseconds
W20250811 02:03:58.498852 16425 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.169s user 0.373s sys 0.793s
W20250811 02:03:58.499204 16434 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:03:58.499239 16425 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:03:58.500578 16425 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:03:58.503500 16425 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:03:58.504904 16425 hybrid_clock.cc:648] HybridClock initialized: now 1754877838504875 us; error 40 us; skew 500 ppm
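(Editor's note: the HybridClock line above reports an initial error of 40 us and an assumed maximum drift ("skew") of 500 ppm. The sketch below is back-of-the-envelope arithmetic for how such an error bound grows with elapsed time under a bounded-drift assumption; it is illustrative only and not Kudu's HybridClock implementation.)

# Illustrative only: worst-case clock error under a bounded drift of
# `skew_ppm` parts per million, starting from an initial error estimate.
def error_bound_us(initial_error_us, skew_ppm, elapsed_us):
    return initial_error_us + elapsed_us * skew_ppm / 1_000_000

# From the log line above: error 40 us, skew 500 ppm.
# One second (1,000,000 us) later the bound would have grown to ~540 us.
print(error_bound_us(40, 500, 1_000_000))  # 540.0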
I20250811 02:03:58.505749 16425 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:03:58.513703 16425 webserver.cc:489] Webserver started at http://127.12.45.62:37139/ using document root <none> and password file <none>
I20250811 02:03:58.514729 16425 fs_manager.cc:362] Metadata directory not provided
I20250811 02:03:58.515003 16425 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:03:58.523936 16425 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.004s sys 0.004s
I20250811 02:03:58.529312 16441 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:03:58.530565 16425 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.003s
I20250811 02:03:58.530890 16425 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "486f1497202943c283c8305e5ca9a2e7"
format_stamp: "Formatted at 2025-08-11 02:03:28 on dist-test-slave-xn5f"
I20250811 02:03:58.532990 16425 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
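(Editor's note: the FS layout report above is emitted as "<counter name>: <value>" lines. Below is a small hypothetical helper for pulling those counters out of a captured report; it assumes only the textual format visible in the log, and the name parse_fs_report is made up for illustration.)

import re

# Hypothetical: collect the leading numeric counters from an fs_report.cc block
# like the one above ("Total live blocks: 0", "Total LBM partial records: 0 (0 repaired)", ...).
def parse_fs_report(text):
    counters = {}
    for name, value in re.findall(r"^(Total [^:]+): (\d+)", text, re.MULTILINE):
        counters[name] = int(value)
    return counters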
I20250811 02:03:58.601137 16425 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:03:58.602663 16425 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:03:58.603156 16425 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:03:58.677436 16425 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:38233
I20250811 02:03:58.677510 16492 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:38233 every 8 connection(s)
I20250811 02:03:58.680541 16425 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:03:58.687096 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16425
I20250811 02:03:58.689258 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:33479
--local_ip_for_outbound_sockets=127.12.45.1
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=33597
--webserver_interface=127.12.45.1
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:03:58.694186 16493 sys_catalog.cc:263] Verifying existing consensus state
I20250811 02:03:58.701781 16493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap starting.
I20250811 02:03:58.712582 16493 log.cc:826] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Log is configured to *not* fsync() on all Append() calls
I20250811 02:03:58.793339 16493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap replayed 1/1 log segments. Stats: ops{read=34 overwritten=0 applied=34 ignored=0} inserts{seen=15 ignored=0} mutations{seen=23 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:03:58.794142 16493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Bootstrap complete.
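(Editor's note: the "Bootstrap replayed ..." summary above packs its counters into ops{...}, inserts{...} and mutations{...} groups. The snippet below is a hypothetical parser for exactly that textual format, not part of Kudu.)

import re

# Hypothetical: unpack the name{key=value ...} groups from a tablet_bootstrap
# replay summary line.
def parse_bootstrap_stats(line):
    stats = {}
    for group, body in re.findall(r"(\w+)\{([^}]*)\}", line):
        stats[group] = {k: int(v) for k, v in
                        (pair.split("=") for pair in body.split())}
    return stats

line = ("Bootstrap replayed 1/1 log segments. Stats: "
        "ops{read=34 overwritten=0 applied=34 ignored=0} "
        "inserts{seen=15 ignored=0} mutations{seen=23 ignored=0}")
print(parse_bootstrap_stats(line)["ops"]["applied"])  # 34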
I20250811 02:03:58.813376 16493 raft_consensus.cc:357] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:58.815693 16493 raft_consensus.cc:738] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Initialized, Role: FOLLOWER
I20250811 02:03:58.816552 16493 consensus_queue.cc:260] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:58.817026 16493 raft_consensus.cc:397] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:03:58.817313 16493 raft_consensus.cc:491] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:03:58.817592 16493 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 5 FOLLOWER]: Advancing to term 6
I20250811 02:03:58.822952 16493 raft_consensus.cc:513] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:58.823743 16493 leader_election.cc:304] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 486f1497202943c283c8305e5ca9a2e7; no voters:
I20250811 02:03:58.825975 16493 leader_election.cc:290] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [CANDIDATE]: Term 6 election: Requested vote from peers
I20250811 02:03:58.826418 16497 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 6 FOLLOWER]: Leader election won for term 6
I20250811 02:03:58.829885 16497 raft_consensus.cc:695] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [term 6 LEADER]: Becoming Leader. State: Replica: 486f1497202943c283c8305e5ca9a2e7, State: Running, Role: LEADER
I20250811 02:03:58.830725 16497 consensus_queue.cc:237] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 34, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } }
I20250811 02:03:58.831413 16493 sys_catalog.cc:564] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:03:58.838735 16499 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 486f1497202943c283c8305e5ca9a2e7. Latest consensus state: current_term: 6 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:58.838862 16498 sys_catalog.cc:455] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "486f1497202943c283c8305e5ca9a2e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "486f1497202943c283c8305e5ca9a2e7" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 38233 } } }
I20250811 02:03:58.839701 16499 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
I20250811 02:03:58.839730 16498 sys_catalog.cc:458] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7 [sys.catalog]: This master's current role is: LEADER
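(Editor's note: the master's sys catalog config has a single voter, so, as logged above, it triggers an election immediately, grants itself the one required vote, and advances from term 5 to term 6 before becoming LEADER. The Raft majority arithmetic behind "1 yes vote out of 1 voter wins" is sketched below, illustrative only.)

# Illustrative Raft majority arithmetic.
def majority(num_voters):
    return num_voters // 2 + 1

assert majority(1) == 1   # a single-replica config wins with its own vote
assert majority(3) == 2   # the 3-replica tablet configs later in this log need 2 of 3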
I20250811 02:03:58.850029 16503 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:03:58.867893 16503 catalog_manager.cc:671] Loaded metadata for table TestTable [id=5931c8d0003c4794b1f081c526bedf62]
I20250811 02:03:58.869722 16503 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=5a733dd1d69f4bc7a6ae0ec2d1c00eb6]
I20250811 02:03:58.872076 16503 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=c5231d6bd5a64ab2aa645cb49916cb66]
I20250811 02:03:58.884019 16503 tablet_loader.cc:96] loaded metadata for tablet 0b62a4d4eed4485aa1f36bc304d94a53 (table TestTable1 [id=5a733dd1d69f4bc7a6ae0ec2d1c00eb6])
I20250811 02:03:58.885875 16503 tablet_loader.cc:96] loaded metadata for tablet bf8ce350bb0d4d84a7bd8dd00558a9b8 (table TestTable2 [id=c5231d6bd5a64ab2aa645cb49916cb66])
I20250811 02:03:58.887601 16503 tablet_loader.cc:96] loaded metadata for tablet c646bf4f65cc45208f9880e776286dc1 (table TestTable [id=5931c8d0003c4794b1f081c526bedf62])
I20250811 02:03:58.889569 16503 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:03:58.897022 16503 catalog_manager.cc:1261] Loaded cluster ID: e0f8de29cc554c7283e85520427607c9
I20250811 02:03:58.897533 16503 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:03:58.909416 16503 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:03:58.917248 16503 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 486f1497202943c283c8305e5ca9a2e7: Loaded TSK: 0
I20250811 02:03:58.919605 16503 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 02:03:59.059530 16495 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:03:59.060091 16495 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:03:59.060614 16495 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:03:59.092968 16495 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:03:59.093871 16495 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:03:59.130111 16495 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:33479
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=33597
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:03:59.131639 16495 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:03:59.133415 16495 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:03:59.146598 16520 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:00.550755 16519 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 16495
W20250811 02:04:00.999192 16495 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.852s user 0.660s sys 1.131s
W20250811 02:04:01.000025 16495 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.853s user 0.660s sys 1.131s
W20250811 02:03:59.149398 16521 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:01.001513 16522 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1852 milliseconds
W20250811 02:04:01.002506 16523 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:01.002446 16495 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:01.005734 16495 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:01.008334 16495 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:01.009699 16495 hybrid_clock.cc:648] HybridClock initialized: now 1754877841009658 us; error 41 us; skew 500 ppm
I20250811 02:04:01.010452 16495 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:01.016933 16495 webserver.cc:489] Webserver started at http://127.12.45.1:33597/ using document root <none> and password file <none>
I20250811 02:04:01.018019 16495 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:01.018265 16495 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:01.026616 16495 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.000s
I20250811 02:04:01.032042 16530 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:01.033308 16495 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250811 02:04:01.033625 16495 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1eb10bfe655143db90d05241378bac9e"
format_stamp: "Formatted at 2025-08-11 02:03:30 on dist-test-slave-xn5f"
I20250811 02:04:01.035648 16495 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:01.105218 16495 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:01.106812 16495 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:01.107329 16495 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:01.110495 16495 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:01.117022 16537 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250811 02:04:01.129092 16495 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250811 02:04:01.129356 16495 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.014s user 0.003s sys 0.000s
I20250811 02:04:01.129650 16495 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250811 02:04:01.135080 16537 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap starting.
I20250811 02:04:01.139941 16495 ts_tablet_manager.cc:610] Registered 2 tablets
I20250811 02:04:01.140256 16495 ts_tablet_manager.cc:589] Time spent register tablets: real 0.011s user 0.010s sys 0.000s
I20250811 02:04:01.212118 16537 log.cc:826] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:01.339574 16537 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:01.340477 16495 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:33479
I20250811 02:04:01.340606 16644 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:33479 every 8 connection(s)
I20250811 02:04:01.340763 16537 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Bootstrap complete.
I20250811 02:04:01.342793 16537 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Time spent bootstrapping tablet: real 0.208s user 0.151s sys 0.053s
I20250811 02:04:01.343309 16495 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:04:01.345685 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16495
I20250811 02:04:01.348176 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:44385
--local_ip_for_outbound_sockets=127.12.45.2
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=42569
--webserver_interface=127.12.45.2
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:04:01.376818 16537 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:01.380167 16537 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Initialized, Role: FOLLOWER
I20250811 02:04:01.381146 16537 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:01.389005 16537 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e: Time spent starting tablet: real 0.046s user 0.024s sys 0.009s
I20250811 02:04:01.389813 16537 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap starting.
I20250811 02:04:01.436275 16645 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:04:01.436797 16645 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:01.438066 16645 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:04:01.443331 16458 ts_manager.cc:194] Registered new tserver with Master: 1eb10bfe655143db90d05241378bac9e (127.12.45.1:33479)
I20250811 02:04:01.448513 16458 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index -1 to 12, term changed from 0 to 2, VOTER 1eb10bfe655143db90d05241378bac9e (127.12.45.1) added, VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) added, VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) added. New cstate: current_term: 2 committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } } }
W20250811 02:04:01.468358 16489 debug-util.cc:398] Leaking SignalData structure 0x7b080006f260 after lost signal to thread 16426
I20250811 02:04:01.530326 16458 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:35033
I20250811 02:04:01.535908 16645 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:04:01.555116 16537 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:01.556161 16537 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Bootstrap complete.
I20250811 02:04:01.557755 16537 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent bootstrapping tablet: real 0.168s user 0.140s sys 0.020s
I20250811 02:04:01.560106 16537 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:01.560823 16537 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Initialized, Role: FOLLOWER
I20250811 02:04:01.561553 16537 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:01.565994 16537 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e: Time spent starting tablet: real 0.008s user 0.004s sys 0.000s
W20250811 02:04:01.863487 16646 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:01.864167 16646 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:01.865002 16646 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:01.928942 16646 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:01.930442 16646 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:04:01.999984 16646 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:44385
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=42569
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:02.001368 16646 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:02.003031 16646 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:02.015843 16661 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:02.873484 16667 raft_consensus.cc:491] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:02.874609 16667 raft_consensus.cc:513] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:02.895907 16667 leader_election.cc:290] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
W20250811 02:04:02.898490 16534 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
W20250811 02:04:02.923031 16533 leader_election.cc:336] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385): Network error: Client connection negotiation failed: client connection to 127.12.45.2:44385: connect: Connection refused (error 111)
W20250811 02:04:02.930456 16534 leader_election.cc:336] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
I20250811 02:04:02.931136 16534 leader_election.cc:304] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e; no voters: 91bc21b8f774428bae1e2365ab7e1f37, 9265cb3403ac47649cd338059475e08d
I20250811 02:04:02.932273 16667 raft_consensus.cc:2747] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250811 02:04:02.958163 16667 raft_consensus.cc:491] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:02.958830 16667 raft_consensus.cc:513] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:02.961344 16667 leader_election.cc:290] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
W20250811 02:04:02.968801 16534 leader_election.cc:336] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
W20250811 02:04:02.971408 16533 leader_election.cc:336] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385): Network error: Client connection negotiation failed: client connection to 127.12.45.2:44385: connect: Connection refused (error 111)
I20250811 02:04:02.972013 16533 leader_election.cc:304] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e; no voters: 91bc21b8f774428bae1e2365ab7e1f37, 9265cb3403ac47649cd338059475e08d
I20250811 02:04:02.973464 16667 raft_consensus.cc:2747] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
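(Editor's note: both pre-elections above fail because the candidate's two peers are still down and refuse connections, so it collects only its own vote: 1 yes out of 3 voters, short of the 2-vote majority. A hypothetical tally of the "Election summary" wording used in these log lines:)

import re

# Hypothetical: decide a (pre-)election from leader_election.cc's summary text,
# using the exact phrasing seen in the log above.
def election_won(summary):
    yes = int(re.search(r"(\d+) yes votes", summary).group(1))
    voters = int(re.search(r"out of (\d+) voters", summary).group(1))
    return yes >= voters // 2 + 1   # Raft majority

summary = "received 3 responses out of 3 voters: 1 yes votes; 2 no votes."
print(election_won(summary))  # False -> "candidate lost", as logged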
W20250811 02:04:03.419710 16660 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 16646
W20250811 02:04:03.859617 16660 kernel_stack_watchdog.cc:198] Thread 16646 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:03.859858 16646 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.843s user 0.691s sys 1.105s
W20250811 02:04:03.860234 16646 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.844s user 0.692s sys 1.105s
W20250811 02:04:02.017313 16662 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:03.862473 16664 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:03.868772 16663 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1851 milliseconds
I20250811 02:04:03.868803 16646 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:03.870494 16646 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:03.873245 16646 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:03.874825 16646 hybrid_clock.cc:648] HybridClock initialized: now 1754877843874732 us; error 93 us; skew 500 ppm
I20250811 02:04:03.875941 16646 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:03.883550 16646 webserver.cc:489] Webserver started at http://127.12.45.2:42569/ using document root <none> and password file <none>
I20250811 02:04:03.884900 16646 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:03.885198 16646 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:03.896597 16646 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.000s sys 0.008s
I20250811 02:04:03.902678 16675 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:03.903908 16646 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 02:04:03.904331 16646 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "91bc21b8f774428bae1e2365ab7e1f37"
format_stamp: "Formatted at 2025-08-11 02:03:31 on dist-test-slave-xn5f"
I20250811 02:04:03.907251 16646 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:03.979112 16646 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:03.981168 16646 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:03.981736 16646 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:03.985107 16646 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:03.993029 16682 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250811 02:04:04.004325 16646 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250811 02:04:04.004658 16646 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250811 02:04:04.004997 16646 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250811 02:04:04.013408 16682 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap starting.
I20250811 02:04:04.018602 16646 ts_tablet_manager.cc:610] Registered 2 tablets
I20250811 02:04:04.018913 16646 ts_tablet_manager.cc:589] Time spent register tablets: real 0.014s user 0.011s sys 0.000s
I20250811 02:04:04.072881 16682 log.cc:826] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:04.195420 16682 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:04.196314 16682 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap complete.
I20250811 02:04:04.198318 16682 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent bootstrapping tablet: real 0.185s user 0.135s sys 0.045s
I20250811 02:04:04.201619 16646 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:44385
I20250811 02:04:04.201749 16789 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:44385 every 8 connection(s)
I20250811 02:04:04.205617 16646 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:04:04.208925 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16646
I20250811 02:04:04.210824 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:36733
--local_ip_for_outbound_sockets=127.12.45.3
--tserver_master_addrs=127.12.45.62:38233
--webserver_port=40649
--webserver_interface=127.12.45.3
--builtin_ntp_servers=127.12.45.20:45821
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 02:04:04.217888 16682 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.221277 16682 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Initialized, Role: FOLLOWER
I20250811 02:04:04.222245 16682 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.233593 16682 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent starting tablet: real 0.035s user 0.022s sys 0.004s
I20250811 02:04:04.234440 16682 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap starting.
I20250811 02:04:04.255970 16790 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:04:04.256472 16790 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:04.257781 16790 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:04:04.263335 16458 ts_manager.cc:194] Registered new tserver with Master: 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385)
I20250811 02:04:04.268146 16458 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:41675
I20250811 02:04:04.272471 16790 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:04:04.393579 16682 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:04.394500 16682 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Bootstrap complete.
I20250811 02:04:04.396013 16682 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent bootstrapping tablet: real 0.162s user 0.128s sys 0.026s
I20250811 02:04:04.398310 16682 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.399012 16682 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91bc21b8f774428bae1e2365ab7e1f37, State: Initialized, Role: FOLLOWER
I20250811 02:04:04.399645 16682 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.401598 16682 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
W20250811 02:04:04.593412 16795 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:04.593883 16795 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:04.594331 16795 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:04.625833 16795 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:04.626669 16795 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:04:04.662047 16795 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45821
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:36733
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=40649
--tserver_master_addrs=127.12.45.62:38233
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:04.663539 16795 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:04.665174 16795 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:04.677099 16802 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:04.786973 16808 raft_consensus.cc:491] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:04.787747 16808 raft_consensus.cc:513] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.804952 16808 leader_election.cc:290] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
W20250811 02:04:04.820240 16534 leader_election.cc:336] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
I20250811 02:04:04.828498 16745 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c646bf4f65cc45208f9880e776286dc1" candidate_uuid: "1eb10bfe655143db90d05241378bac9e" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "91bc21b8f774428bae1e2365ab7e1f37" is_pre_election: true
I20250811 02:04:04.829869 16745 raft_consensus.cc:2466] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1eb10bfe655143db90d05241378bac9e in term 2.
I20250811 02:04:04.832401 16533 leader_election.cc:304] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e, 91bc21b8f774428bae1e2365ab7e1f37; no voters: 9265cb3403ac47649cd338059475e08d
I20250811 02:04:04.833693 16808 raft_consensus.cc:2802] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250811 02:04:04.834249 16808 raft_consensus.cc:491] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:04:04.834743 16808 raft_consensus.cc:3058] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Advancing to term 3
I20250811 02:04:04.850421 16808 raft_consensus.cc:513] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 3 FOLLOWER]: Starting leader election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.855038 16745 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c646bf4f65cc45208f9880e776286dc1" candidate_uuid: "1eb10bfe655143db90d05241378bac9e" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "91bc21b8f774428bae1e2365ab7e1f37"
I20250811 02:04:04.855916 16745 raft_consensus.cc:3058] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Advancing to term 3
I20250811 02:04:04.859540 16808 leader_election.cc:290] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: Requested vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
W20250811 02:04:04.863866 16534 leader_election.cc:336] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
I20250811 02:04:04.872973 16745 raft_consensus.cc:2466] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1eb10bfe655143db90d05241378bac9e in term 3.
I20250811 02:04:04.874696 16533 leader_election.cc:304] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e, 91bc21b8f774428bae1e2365ab7e1f37; no voters: 9265cb3403ac47649cd338059475e08d
I20250811 02:04:04.875823 16808 raft_consensus.cc:2802] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 3 FOLLOWER]: Leader election won for term 3
I20250811 02:04:04.881609 16808 raft_consensus.cc:695] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 3 LEADER]: Becoming Leader. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Running, Role: LEADER
I20250811 02:04:04.883001 16808 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:04.908689 16458 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: term changed from 2 to 3, leader changed from <none> to 1eb10bfe655143db90d05241378bac9e (127.12.45.1). New cstate: current_term: 3 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250811 02:04:05.039594 16808 raft_consensus.cc:491] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:05.040314 16808 raft_consensus.cc:513] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:05.046247 16745 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0b62a4d4eed4485aa1f36bc304d94a53" candidate_uuid: "1eb10bfe655143db90d05241378bac9e" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "91bc21b8f774428bae1e2365ab7e1f37" is_pre_election: true
I20250811 02:04:05.047338 16745 raft_consensus.cc:2466] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1eb10bfe655143db90d05241378bac9e in term 2.
I20250811 02:04:05.049150 16533 leader_election.cc:304] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e, 91bc21b8f774428bae1e2365ab7e1f37; no voters:
I20250811 02:04:05.050537 16813 raft_consensus.cc:2802] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250811 02:04:05.051077 16813 raft_consensus.cc:491] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:04:05.051563 16813 raft_consensus.cc:3058] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 2 FOLLOWER]: Advancing to term 3
W20250811 02:04:05.063573 16534 leader_election.cc:336] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
I20250811 02:04:05.064253 16808 leader_election.cc:290] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
I20250811 02:04:05.074208 16813 raft_consensus.cc:513] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:05.078742 16745 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0b62a4d4eed4485aa1f36bc304d94a53" candidate_uuid: "1eb10bfe655143db90d05241378bac9e" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "91bc21b8f774428bae1e2365ab7e1f37"
I20250811 02:04:05.079660 16745 raft_consensus.cc:3058] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 2 FOLLOWER]: Advancing to term 3
W20250811 02:04:05.098428 16534 leader_election.cc:336] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111)
I20250811 02:04:05.099555 16813 leader_election.cc:290] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: Requested vote from peers 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385), 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
I20250811 02:04:05.100944 16745 raft_consensus.cc:2466] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1eb10bfe655143db90d05241378bac9e in term 3.
I20250811 02:04:05.102670 16533 leader_election.cc:304] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1eb10bfe655143db90d05241378bac9e, 91bc21b8f774428bae1e2365ab7e1f37; no voters: 9265cb3403ac47649cd338059475e08d
I20250811 02:04:05.104301 16813 raft_consensus.cc:2802] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 3 FOLLOWER]: Leader election won for term 3
I20250811 02:04:05.105989 16813 raft_consensus.cc:695] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [term 3 LEADER]: Becoming Leader. State: Replica: 1eb10bfe655143db90d05241378bac9e, State: Running, Role: LEADER
I20250811 02:04:05.107805 16813 consensus_queue.cc:237] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:05.129220 16458 catalog_manager.cc:5582] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e reported cstate change: term changed from 2 to 3, leader changed from 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) to 1eb10bfe655143db90d05241378bac9e (127.12.45.1). New cstate: current_term: 3 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250811 02:04:05.289306 16745 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 3 index: 13. (index mismatch)
I20250811 02:04:05.291698 16813 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.001s
W20250811 02:04:05.343001 16534 consensus_peers.cc:489] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e -> Peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Couldn't send request to peer 9265cb3403ac47649cd338059475e08d. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 02:04:05.405879 16600 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 3.13, Last appended by leader: 12, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } }
I20250811 02:04:05.414263 16744 raft_consensus.cc:1273] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 3 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250811 02:04:05.416460 16813 consensus_queue.cc:1035] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.001s
I20250811 02:04:05.426004 16808 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 3 LEADER]: Committing config change with OpId 3.14: config changed from index 12 to 14, VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) evicted. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } }
I20250811 02:04:05.433960 16745 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Committing config change with OpId 3.14: config changed from index 12 to 14, VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) evicted. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } }
I20250811 02:04:05.443707 16443 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index 12: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 02:04:05.450737 16458 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index 12 to 14, VOTER 9265cb3403ac47649cd338059475e08d (127.12.45.3) evicted. New cstate: current_term: 3 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250811 02:04:05.470194 16458 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet c646bf4f65cc45208f9880e776286dc1 on TS 9265cb3403ac47649cd338059475e08d: Not found: failed to reset TS proxy: Could not find TS for UUID 9265cb3403ac47649cd338059475e08d
I20250811 02:04:05.475638 16600 consensus_queue.cc:237] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 3.14, Last appended by leader: 12, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } }
I20250811 02:04:05.479450 16808 raft_consensus.cc:2953] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e [term 3 LEADER]: Committing config change with OpId 3.15: config changed from index 14 to 15, VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } }
I20250811 02:04:05.494874 16443 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet c646bf4f65cc45208f9880e776286dc1 with cas_config_opid_index 14: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 02:04:05.502002 16458 catalog_manager.cc:5582] T c646bf4f65cc45208f9880e776286dc1 P 1eb10bfe655143db90d05241378bac9e reported cstate change: config changed from index 14 to 15, VOTER 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2) evicted. New cstate: current_term: 3 leader_uuid: "1eb10bfe655143db90d05241378bac9e" committed_config { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } health_report { overall_health: HEALTHY } } }
W20250811 02:04:05.526057 16443 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet c646bf4f65cc45208f9880e776286dc1 on TS 9265cb3403ac47649cd338059475e08d failed: Not found: failed to reset TS proxy: Could not find TS for UUID 9265cb3403ac47649cd338059475e08d
I20250811 02:04:05.546638 16725 tablet_service.cc:1515] Processing DeleteTablet for tablet c646bf4f65cc45208f9880e776286dc1 with delete_type TABLET_DATA_TOMBSTONED (TS 91bc21b8f774428bae1e2365ab7e1f37 not found in new config with opid_index 15) from {username='slave'} at 127.0.0.1:46720
I20250811 02:04:05.552726 16827 tablet_replica.cc:331] stopping tablet replica
I20250811 02:04:05.555826 16827 raft_consensus.cc:2241] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250811 02:04:05.556708 16827 raft_consensus.cc:2270] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250811 02:04:05.562132 16827 ts_tablet_manager.cc:1905] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 02:04:05.584401 16827 ts_tablet_manager.cc:1918] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.14
I20250811 02:04:05.585150 16827 log.cc:1199] T c646bf4f65cc45208f9880e776286dc1 P 91bc21b8f774428bae1e2365ab7e1f37: Deleting WAL directory at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/wals/c646bf4f65cc45208f9880e776286dc1
I20250811 02:04:05.587641 16444 catalog_manager.cc:4928] TS 91bc21b8f774428bae1e2365ab7e1f37 (127.12.45.2:44385): tablet c646bf4f65cc45208f9880e776286dc1 (table TestTable [id=5931c8d0003c4794b1f081c526bedf62]) successfully deleted
I20250811 02:04:05.596602 16745 raft_consensus.cc:1273] T 0b62a4d4eed4485aa1f36bc304d94a53 P 91bc21b8f774428bae1e2365ab7e1f37 [term 3 FOLLOWER]: Refusing update from remote peer 1eb10bfe655143db90d05241378bac9e: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250811 02:04:05.598750 16808 consensus_queue.cc:1035] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e [LEADER]: Connected to new peer: Peer: permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
W20250811 02:04:05.654397 16534 consensus_peers.cc:489] T 0b62a4d4eed4485aa1f36bc304d94a53 P 1eb10bfe655143db90d05241378bac9e -> Peer 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): Couldn't send request to peer 9265cb3403ac47649cd338059475e08d. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.3:36733: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250811 02:04:06.079407 16801 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 16795
W20250811 02:04:06.358615 16801 kernel_stack_watchdog.cc:198] Thread 16795 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:06.359004 16795 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.682s user 0.499s sys 1.071s
W20250811 02:04:04.677783 16803 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:06.359464 16795 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.682s user 0.500s sys 1.071s
W20250811 02:04:06.361397 16805 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:06.363710 16804 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1681 milliseconds
I20250811 02:04:06.363794 16795 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:06.365159 16795 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:06.367712 16795 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:06.369184 16795 hybrid_clock.cc:648] HybridClock initialized: now 1754877846369117 us; error 47 us; skew 500 ppm
I20250811 02:04:06.370287 16795 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:06.377635 16795 webserver.cc:489] Webserver started at http://127.12.45.3:40649/ using document root <none> and password file <none>
I20250811 02:04:06.378862 16795 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:06.379156 16795 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:06.389251 16795 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.003s sys 0.002s
I20250811 02:04:06.395084 16833 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:06.396239 16795 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 02:04:06.396616 16795 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "9265cb3403ac47649cd338059475e08d"
format_stamp: "Formatted at 2025-08-11 02:03:33 on dist-test-slave-xn5f"
I20250811 02:04:06.399238 16795 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:06.450619 16795 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:06.452153 16795 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:06.452554 16795 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:06.455320 16795 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:06.461501 16840 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250811 02:04:06.477056 16795 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250811 02:04:06.477283 16795 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.018s user 0.000s sys 0.003s
I20250811 02:04:06.477506 16795 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250811 02:04:06.482604 16840 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:04:06.492354 16795 ts_tablet_manager.cc:610] Registered 3 tablets
I20250811 02:04:06.492622 16795 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.015s user 0.011s sys 0.003s
I20250811 02:04:06.549084 16840 log.cc:826] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:06.662119 16840 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:06.663187 16840 tablet_bootstrap.cc:492] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:04:06.665226 16840 ts_tablet_manager.cc:1397] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.183s user 0.161s sys 0.018s
I20250811 02:04:06.679466 16795 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:36733
I20250811 02:04:06.679587 16949 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:36733 every 8 connection(s)
I20250811 02:04:06.682950 16795 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:04:06.683679 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16795
I20250811 02:04:06.691362 16840 raft_consensus.cc:357] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:06.695374 16840 raft_consensus.cc:738] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: FOLLOWER
I20250811 02:04:06.696362 16840 consensus_queue.cc:260] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } } peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
W20250811 02:04:06.704735 16443 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet c646bf4f65cc45208f9880e776286dc1 on TS 9265cb3403ac47649cd338059475e08d failed: Not found: failed to reset TS proxy: Could not find TS for UUID 9265cb3403ac47649cd338059475e08d
I20250811 02:04:06.707258 16840 ts_tablet_manager.cc:1428] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.042s user 0.029s sys 0.012s
I20250811 02:04:06.707922 16840 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:04:06.726732 16950 heartbeater.cc:344] Connected to a master server at 127.12.45.62:38233
I20250811 02:04:06.727319 16950 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:06.729521 16950 heartbeater.cc:507] Master 127.12.45.62:38233 requested a full tablet report, sending...
I20250811 02:04:06.736068 16457 ts_manager.cc:194] Registered new tserver with Master: 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733)
I20250811 02:04:06.742897 16457 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:58297
I20250811 02:04:06.747457 16950 heartbeater.cc:499] Master 127.12.45.62:38233 was elected leader, sending a full tablet report...
I20250811 02:04:06.748659 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:04:06.754034 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250811 02:04:06.757539 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250811 02:04:06.815793 16840 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:06.816434 16840 tablet_bootstrap.cc:492] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:04:06.817484 16840 ts_tablet_manager.cc:1397] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.110s user 0.094s sys 0.012s
I20250811 02:04:06.819087 16840 raft_consensus.cc:357] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:06.819501 16840 raft_consensus.cc:738] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: FOLLOWER
I20250811 02:04:06.819975 16840 consensus_queue.cc:260] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "1eb10bfe655143db90d05241378bac9e" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 33479 } } peers { permanent_uuid: "91bc21b8f774428bae1e2365ab7e1f37" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 44385 } attrs { promote: false } } peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } attrs { promote: false } }
I20250811 02:04:06.821197 16840 ts_tablet_manager.cc:1428] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250811 02:04:06.821630 16840 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap starting.
I20250811 02:04:06.907042 16840 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:04:06.907711 16840 tablet_bootstrap.cc:492] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Bootstrap complete.
I20250811 02:04:06.908828 16840 ts_tablet_manager.cc:1397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent bootstrapping tablet: real 0.087s user 0.077s sys 0.008s
I20250811 02:04:06.910244 16840 raft_consensus.cc:357] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:04:06.910565 16840 raft_consensus.cc:738] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Initialized, Role: FOLLOWER
I20250811 02:04:06.910984 16840 consensus_queue.cc:260] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:04:06.911377 16840 raft_consensus.cc:397] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:04:06.911623 16840 raft_consensus.cc:491] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:04:06.911918 16840 raft_consensus.cc:3058] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Advancing to term 3
I20250811 02:04:06.918869 16840 raft_consensus.cc:513] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:04:06.919806 16840 leader_election.cc:304] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9265cb3403ac47649cd338059475e08d; no voters:
I20250811 02:04:06.920536 16840 leader_election.cc:290] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [CANDIDATE]: Term 3 election: Requested vote from peers
I20250811 02:04:06.920814 16956 raft_consensus.cc:2802] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 3 FOLLOWER]: Leader election won for term 3
I20250811 02:04:06.923839 16840 ts_tablet_manager.cc:1428] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d: Time spent starting tablet: real 0.015s user 0.016s sys 0.000s
I20250811 02:04:06.924715 16956 raft_consensus.cc:695] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [term 3 LEADER]: Becoming Leader. State: Replica: 9265cb3403ac47649cd338059475e08d, State: Running, Role: LEADER
I20250811 02:04:06.925356 16956 consensus_queue.cc:237] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } }
I20250811 02:04:06.935451 16457 catalog_manager.cc:5582] T bf8ce350bb0d4d84a7bd8dd00558a9b8 P 9265cb3403ac47649cd338059475e08d reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "9265cb3403ac47649cd338059475e08d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9265cb3403ac47649cd338059475e08d" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 36733 } health_report { overall_health: HEALTHY } } }
I20250811 02:04:07.115710 16903 raft_consensus.cc:3058] T 0b62a4d4eed4485aa1f36bc304d94a53 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Advancing to term 3
W20250811 02:04:07.762285 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250811 02:04:07.790066 16883 tablet_service.cc:1515] Processing DeleteTablet for tablet c646bf4f65cc45208f9880e776286dc1 with delete_type TABLET_DATA_TOMBSTONED (TS 9265cb3403ac47649cd338059475e08d not found in new config with opid_index 14) from {username='slave'} at 127.0.0.1:41422
I20250811 02:04:07.791960 16975 tablet_replica.cc:331] stopping tablet replica
I20250811 02:04:07.792542 16975 raft_consensus.cc:2241] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Raft consensus shutting down.
I20250811 02:04:07.792902 16975 raft_consensus.cc:2270] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 02:04:07.795105 16975 ts_tablet_manager.cc:1905] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 02:04:07.804533 16975 ts_tablet_manager.cc:1918] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.12
I20250811 02:04:07.804816 16975 log.cc:1199] T c646bf4f65cc45208f9880e776286dc1 P 9265cb3403ac47649cd338059475e08d: Deleting WAL directory at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/wals/c646bf4f65cc45208f9880e776286dc1
I20250811 02:04:07.806172 16445 catalog_manager.cc:4928] TS 9265cb3403ac47649cd338059475e08d (127.12.45.3:36733): tablet c646bf4f65cc45208f9880e776286dc1 (table TestTable [id=5931c8d0003c4794b1f081c526bedf62]) successfully deleted
W20250811 02:04:08.766417 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:09.770279 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:10.773967 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:11.777593 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:12.781240 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:13.784868 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:14.788245 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:15.791980 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:16.795773 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:17.800462 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:18.804777 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:19.808441 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:20.812166 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:21.815611 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:22.819005 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:23.822588 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:24.826071 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 02:04:25.829464 12468 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c646bf4f65cc45208f9880e776286dc1: tablet_id: "c646bf4f65cc45208f9880e776286dc1" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250811 02:04:26.833722 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16495
I20250811 02:04:26.861191 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16646
I20250811 02:04:26.887519 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16795
I20250811 02:04:26.912621 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16425
2025-08-11T02:04:26Z chronyd exiting
I20250811 02:04:26.965306 12468 test_util.cc:183] -----------------------------------------------
I20250811 02:04:26.965524 12468 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754877736375003-12468-0
[ FAILED ] AdminCliTest.TestRebuildTables (60825 ms)
[----------] 5 tests from AdminCliTest (130507 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250811 02:04:26.970108 12468 test_util.cc:276] Using random seed: 1474464736
I20250811 02:04:26.974372 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:04:26.974531 12468 ts_itest-base.cc:116] --------------
I20250811 02:04:26.974699 12468 ts_itest-base.cc:117] 5 tablet servers
I20250811 02:04:26.974836 12468 ts_itest-base.cc:118] 3 replicas per TS
I20250811 02:04:26.974987 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:04:26Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:04:26Z Disabled control of system clock
I20250811 02:04:27.018441 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:45597
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:45597
--raft_prepare_replacement_before_eviction=true with env {}
W20250811 02:04:27.320833 16993 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:27.321416 16993 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:27.321808 16993 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:27.352407 16993 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:27.352757 16993 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:04:27.352962 16993 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:27.353169 16993 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:04:27.353350 16993 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:04:27.388639 16993 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:45597
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:45597
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:27.390059 16993 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:27.391752 16993 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:27.402407 16999 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:27.403992 17000 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:28.575284 17002 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:28.577526 17001 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1169 milliseconds
I20250811 02:04:28.577690 16993 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:28.578897 16993 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:28.582166 16993 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:28.583557 16993 hybrid_clock.cc:648] HybridClock initialized: now 1754877868583518 us; error 47 us; skew 500 ppm
I20250811 02:04:28.584403 16993 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:28.591054 16993 webserver.cc:489] Webserver started at http://127.12.45.62:39009/ using document root <none> and password file <none>
I20250811 02:04:28.591995 16993 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:28.592218 16993 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:28.592684 16993 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:28.597033 16993 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "2959095e17914ea5b66f6cb0abaf5378"
format_stamp: "Formatted at 2025-08-11 02:04:28 on dist-test-slave-xn5f"
I20250811 02:04:28.598169 16993 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "2959095e17914ea5b66f6cb0abaf5378"
format_stamp: "Formatted at 2025-08-11 02:04:28 on dist-test-slave-xn5f"
I20250811 02:04:28.605405 16993 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250811 02:04:28.611104 17009 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:28.612150 16993 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250811 02:04:28.612475 16993 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "2959095e17914ea5b66f6cb0abaf5378"
format_stamp: "Formatted at 2025-08-11 02:04:28 on dist-test-slave-xn5f"
I20250811 02:04:28.612802 16993 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:28.665853 16993 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:28.667459 16993 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:28.667861 16993 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:28.743247 16993 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:45597
I20250811 02:04:28.743314 17060 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:45597 every 8 connection(s)
I20250811 02:04:28.745982 16993 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:04:28.748778 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 16993
I20250811 02:04:28.749398 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:04:28.751974 17061 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:28.778278 17061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378: Bootstrap starting.
I20250811 02:04:28.784276 17061 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:28.785969 17061 log.cc:826] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:28.790509 17061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378: No bootstrap required, opened a new log
I20250811 02:04:28.807082 17061 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } }
I20250811 02:04:28.807778 17061 raft_consensus.cc:383] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:28.807972 17061 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2959095e17914ea5b66f6cb0abaf5378, State: Initialized, Role: FOLLOWER
I20250811 02:04:28.808635 17061 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } }
I20250811 02:04:28.809358 17061 raft_consensus.cc:397] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:04:28.809649 17061 raft_consensus.cc:491] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:04:28.809940 17061 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:28.814265 17061 raft_consensus.cc:513] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } }
I20250811 02:04:28.814878 17061 leader_election.cc:304] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2959095e17914ea5b66f6cb0abaf5378; no voters:
I20250811 02:04:28.816617 17061 leader_election.cc:290] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:04:28.817335 17066 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:04:28.819446 17066 raft_consensus.cc:695] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [term 1 LEADER]: Becoming Leader. State: Replica: 2959095e17914ea5b66f6cb0abaf5378, State: Running, Role: LEADER
I20250811 02:04:28.820228 17066 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } }
I20250811 02:04:28.821270 17061 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:04:28.831773 17067 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2959095e17914ea5b66f6cb0abaf5378" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } } }
I20250811 02:04:28.833493 17067 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:28.832700 17068 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2959095e17914ea5b66f6cb0abaf5378. Latest consensus state: current_term: 1 leader_uuid: "2959095e17914ea5b66f6cb0abaf5378" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2959095e17914ea5b66f6cb0abaf5378" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 45597 } } }
I20250811 02:04:28.836007 17068 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378 [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:28.846261 17075 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:04:28.857919 17075 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:04:28.875469 17075 catalog_manager.cc:1349] Generated new cluster ID: 43568b44afb349338e352f9fb1954982
I20250811 02:04:28.875705 17075 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:04:28.892752 17075 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:04:28.894491 17075 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:04:28.906528 17075 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 2959095e17914ea5b66f6cb0abaf5378: Generated new TSK 0
I20250811 02:04:28.907693 17075 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:04:28.931919 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250811 02:04:29.240450 17085 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:29.240974 17085 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:29.241474 17085 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:29.271865 17085 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:29.272290 17085 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:29.273066 17085 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:04:29.306738 17085 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:29.308241 17085 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:29.309993 17085 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:29.322079 17091 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:30.725747 17090 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 17085
W20250811 02:04:29.323457 17092 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:30.757670 17090 kernel_stack_watchdog.cc:198] Thread 17085 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:30.758340 17085 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.435s user 0.421s sys 0.994s
W20250811 02:04:30.758778 17085 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.435s user 0.421s sys 0.994s
W20250811 02:04:30.768894 17093 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1445 milliseconds
W20250811 02:04:30.769750 17094 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:30.769807 17085 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:30.771080 17085 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:30.773751 17085 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:30.775158 17085 hybrid_clock.cc:648] HybridClock initialized: now 1754877870775111 us; error 49 us; skew 500 ppm
I20250811 02:04:30.775949 17085 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:30.782765 17085 webserver.cc:489] Webserver started at http://127.12.45.1:34607/ using document root <none> and password file <none>
I20250811 02:04:30.783782 17085 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:30.783972 17085 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:30.784487 17085 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:30.789155 17085 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "b527e0417f5f4ad78bed22a3e83a9821"
format_stamp: "Formatted at 2025-08-11 02:04:30 on dist-test-slave-xn5f"
I20250811 02:04:30.790302 17085 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "b527e0417f5f4ad78bed22a3e83a9821"
format_stamp: "Formatted at 2025-08-11 02:04:30 on dist-test-slave-xn5f"
I20250811 02:04:30.798096 17085 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.001s
I20250811 02:04:30.804327 17101 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:30.805487 17085 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.005s
I20250811 02:04:30.805770 17085 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "b527e0417f5f4ad78bed22a3e83a9821"
format_stamp: "Formatted at 2025-08-11 02:04:30 on dist-test-slave-xn5f"
I20250811 02:04:30.806094 17085 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:30.856884 17085 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:30.858381 17085 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:30.858839 17085 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:30.862075 17085 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:30.866278 17085 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:30.866478 17085 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:30.866739 17085 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:30.866885 17085 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:31.042814 17085 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:43655
I20250811 02:04:31.042973 17213 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:43655 every 8 connection(s)
I20250811 02:04:31.046032 17085 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:04:31.053473 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17085
I20250811 02:04:31.053876 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:04:31.059888 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 02:04:31.081249 17214 heartbeater.cc:344] Connected to a master server at 127.12.45.62:45597
I20250811 02:04:31.081704 17214 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:31.082854 17214 heartbeater.cc:507] Master 127.12.45.62:45597 requested a full tablet report, sending...
I20250811 02:04:31.085423 17026 ts_manager.cc:194] Registered new tserver with Master: b527e0417f5f4ad78bed22a3e83a9821 (127.12.45.1:43655)
I20250811 02:04:31.087515 17026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:54907
W20250811 02:04:31.378423 17218 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:31.378907 17218 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:31.379429 17218 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:31.408152 17218 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:31.408524 17218 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:31.409245 17218 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:04:31.441043 17218 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:31.442427 17218 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:31.444010 17218 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:31.456137 17224 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:32.090996 17214 heartbeater.cc:499] Master 127.12.45.62:45597 was elected leader, sending a full tablet report...
W20250811 02:04:31.456697 17225 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:32.859946 17223 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 17218
W20250811 02:04:33.212549 17223 kernel_stack_watchdog.cc:198] Thread 17218 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:33.213047 17218 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.757s user 0.000s sys 0.001s
W20250811 02:04:33.213424 17218 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.757s user 0.000s sys 0.001s
W20250811 02:04:33.216601 17226 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1756 milliseconds
W20250811 02:04:33.218304 17227 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:33.218431 17218 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:33.219792 17218 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:33.222086 17218 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:33.223461 17218 hybrid_clock.cc:648] HybridClock initialized: now 1754877873223419 us; error 50 us; skew 500 ppm
I20250811 02:04:33.224336 17218 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:33.232353 17218 webserver.cc:489] Webserver started at http://127.12.45.2:46827/ using document root <none> and password file <none>
I20250811 02:04:33.233517 17218 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:33.233743 17218 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:33.234258 17218 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:33.239508 17218 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "f676613f3e73471892e10834d199fa10"
format_stamp: "Formatted at 2025-08-11 02:04:33 on dist-test-slave-xn5f"
I20250811 02:04:33.240828 17218 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "f676613f3e73471892e10834d199fa10"
format_stamp: "Formatted at 2025-08-11 02:04:33 on dist-test-slave-xn5f"
I20250811 02:04:33.249755 17218 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.004s
I20250811 02:04:33.256386 17234 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:33.257759 17218 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:04:33.258147 17218 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "f676613f3e73471892e10834d199fa10"
format_stamp: "Formatted at 2025-08-11 02:04:33 on dist-test-slave-xn5f"
I20250811 02:04:33.258536 17218 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:33.311223 17218 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:33.312968 17218 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:33.313493 17218 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:33.316375 17218 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:33.321168 17218 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:33.321426 17218 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:33.321704 17218 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:33.321878 17218 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:33.468137 17218 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:45655
I20250811 02:04:33.468250 17346 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:45655 every 8 connection(s)
I20250811 02:04:33.471237 17218 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 02:04:33.473701 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17218
I20250811 02:04:33.474381 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 02:04:33.484365 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 02:04:33.500044 17347 heartbeater.cc:344] Connected to a master server at 127.12.45.62:45597
I20250811 02:04:33.500715 17347 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:33.502414 17347 heartbeater.cc:507] Master 127.12.45.62:45597 requested a full tablet report, sending...
I20250811 02:04:33.505649 17026 ts_manager.cc:194] Registered new tserver with Master: f676613f3e73471892e10834d199fa10 (127.12.45.2:45655)
I20250811 02:04:33.507290 17026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:33837
W20250811 02:04:33.824257 17351 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:33.824851 17351 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:33.825358 17351 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:33.857709 17351 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:33.858180 17351 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:33.859035 17351 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:04:33.894717 17351 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:33.896565 17351 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:33.898625 17351 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:33.912984 17357 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:34.512514 17347 heartbeater.cc:499] Master 127.12.45.62:45597 was elected leader, sending a full tablet report...
W20250811 02:04:33.913396 17358 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:35.316887 17356 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 17351
W20250811 02:04:35.686791 17351 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.774s user 0.631s sys 1.109s
W20250811 02:04:35.687745 17351 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.776s user 0.631s sys 1.109s
W20250811 02:04:35.687868 17359 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1773 milliseconds
W20250811 02:04:35.689082 17360 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:35.689018 17351 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:35.692610 17351 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:35.694854 17351 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:35.696382 17351 hybrid_clock.cc:648] HybridClock initialized: now 1754877875696346 us; error 52 us; skew 500 ppm
I20250811 02:04:35.697201 17351 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:35.703912 17351 webserver.cc:489] Webserver started at http://127.12.45.3:40493/ using document root <none> and password file <none>
I20250811 02:04:35.705060 17351 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:35.705271 17351 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:35.705714 17351 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:35.710847 17351 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "0f5a994756d549dcb9d910bd6f1b6193"
format_stamp: "Formatted at 2025-08-11 02:04:35 on dist-test-slave-xn5f"
I20250811 02:04:35.712213 17351 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "0f5a994756d549dcb9d910bd6f1b6193"
format_stamp: "Formatted at 2025-08-11 02:04:35 on dist-test-slave-xn5f"
I20250811 02:04:35.720932 17351 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.007s sys 0.001s
I20250811 02:04:35.727623 17367 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:35.729122 17351 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.000s sys 0.004s
I20250811 02:04:35.729489 17351 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "0f5a994756d549dcb9d910bd6f1b6193"
format_stamp: "Formatted at 2025-08-11 02:04:35 on dist-test-slave-xn5f"
I20250811 02:04:35.729861 17351 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:35.786011 17351 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:35.787822 17351 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:35.788301 17351 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:35.791152 17351 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:35.795861 17351 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:35.796098 17351 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:35.796417 17351 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:35.796581 17351 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:35.941818 17351 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:43955
I20250811 02:04:35.941921 17479 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:43955 every 8 connection(s)
I20250811 02:04:35.944835 17351 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 02:04:35.945930 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17351
I20250811 02:04:35.946589 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 02:04:35.958124 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.4:0
--local_ip_for_outbound_sockets=127.12.45.4
--webserver_interface=127.12.45.4
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 02:04:35.976200 17480 heartbeater.cc:344] Connected to a master server at 127.12.45.62:45597
I20250811 02:04:35.976908 17480 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:35.978686 17480 heartbeater.cc:507] Master 127.12.45.62:45597 requested a full tablet report, sending...
I20250811 02:04:35.982290 17026 ts_manager.cc:194] Registered new tserver with Master: 0f5a994756d549dcb9d910bd6f1b6193 (127.12.45.3:43955)
I20250811 02:04:35.984696 17026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:40687
W20250811 02:04:36.295684 17484 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:36.296176 17484 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:36.296635 17484 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:36.328133 17484 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:36.328512 17484 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:36.329262 17484 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.4
I20250811 02:04:36.363912 17484 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.12.45.4
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.4
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:36.365309 17484 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:36.367053 17484 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:36.379905 17490 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:36.990566 17480 heartbeater.cc:499] Master 127.12.45.62:45597 was elected leader, sending a full tablet report...
W20250811 02:04:36.380502 17491 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:36.384348 17493 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:37.606401 17492 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1221 milliseconds
I20250811 02:04:37.606519 17484 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:37.607777 17484 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:37.610637 17484 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:37.612073 17484 hybrid_clock.cc:648] HybridClock initialized: now 1754877877612043 us; error 43 us; skew 500 ppm
I20250811 02:04:37.612906 17484 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:37.619086 17484 webserver.cc:489] Webserver started at http://127.12.45.4:36945/ using document root <none> and password file <none>
I20250811 02:04:37.620034 17484 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:37.620262 17484 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:37.620760 17484 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:37.625375 17484 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "7c6ed6029a144aebb90ae1a6f0384a7f"
format_stamp: "Formatted at 2025-08-11 02:04:37 on dist-test-slave-xn5f"
I20250811 02:04:37.626478 17484 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "7c6ed6029a144aebb90ae1a6f0384a7f"
format_stamp: "Formatted at 2025-08-11 02:04:37 on dist-test-slave-xn5f"
I20250811 02:04:37.633648 17484 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.000s
I20250811 02:04:37.639415 17502 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:37.640480 17484 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 02:04:37.640794 17484 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "7c6ed6029a144aebb90ae1a6f0384a7f"
format_stamp: "Formatted at 2025-08-11 02:04:37 on dist-test-slave-xn5f"
I20250811 02:04:37.641116 17484 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:37.687752 17484 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:37.689224 17484 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:37.689656 17484 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:37.692256 17484 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:37.696377 17484 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:37.696586 17484 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:37.696820 17484 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:37.696979 17484 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:37.838276 17484 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.4:37743
I20250811 02:04:37.838392 17614 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.4:37743 every 8 connection(s)
I20250811 02:04:37.841003 17484 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250811 02:04:37.848956 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17484
I20250811 02:04:37.849740 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250811 02:04:37.858601 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.5:0
--local_ip_for_outbound_sockets=127.12.45.5
--webserver_interface=127.12.45.5
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--builtin_ntp_servers=127.12.45.20:44311
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 02:04:37.868978 17615 heartbeater.cc:344] Connected to a master server at 127.12.45.62:45597
I20250811 02:04:37.869436 17615 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:37.870497 17615 heartbeater.cc:507] Master 127.12.45.62:45597 requested a full tablet report, sending...
I20250811 02:04:37.872891 17026 ts_manager.cc:194] Registered new tserver with Master: 7c6ed6029a144aebb90ae1a6f0384a7f (127.12.45.4:37743)
I20250811 02:04:37.874696 17026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.4:37849
W20250811 02:04:38.172349 17619 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:38.172945 17619 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:38.173465 17619 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:38.204309 17619 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 02:04:38.204705 17619 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:38.205476 17619 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.5
I20250811 02:04:38.239727 17619 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:44311
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.5:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.12.45.5
--webserver_port=0
--tserver_master_addrs=127.12.45.62:45597
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.5
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:38.241182 17619 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:38.242837 17619 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:38.255640 17625 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:38.256376 17626 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:38.877986 17615 heartbeater.cc:499] Master 127.12.45.62:45597 was elected leader, sending a full tablet report...
W20250811 02:04:39.657339 17624 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 17619
W20250811 02:04:39.947109 17619 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.692s user 0.604s sys 1.067s
W20250811 02:04:39.948364 17619 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.693s user 0.604s sys 1.067s
W20250811 02:04:39.949337 17628 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:39.952574 17627 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1691 milliseconds
I20250811 02:04:39.952633 17619 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:39.953826 17619 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:39.956100 17619 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:39.957461 17619 hybrid_clock.cc:648] HybridClock initialized: now 1754877879957433 us; error 33 us; skew 500 ppm
I20250811 02:04:39.958248 17619 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:39.964121 17619 webserver.cc:489] Webserver started at http://127.12.45.5:44013/ using document root <none> and password file <none>
I20250811 02:04:39.965041 17619 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:39.965257 17619 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:39.965699 17619 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:39.970165 17619 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "090eba7a3a544c0297aaaa2613df7428"
format_stamp: "Formatted at 2025-08-11 02:04:39 on dist-test-slave-xn5f"
I20250811 02:04:39.971318 17619 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "090eba7a3a544c0297aaaa2613df7428"
format_stamp: "Formatted at 2025-08-11 02:04:39 on dist-test-slave-xn5f"
I20250811 02:04:39.978282 17619 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.001s sys 0.005s
I20250811 02:04:39.983731 17636 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:39.984818 17619 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:04:39.985116 17619 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "090eba7a3a544c0297aaaa2613df7428"
format_stamp: "Formatted at 2025-08-11 02:04:39 on dist-test-slave-xn5f"
I20250811 02:04:39.985414 17619 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:40.034730 17619 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:40.036211 17619 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:40.036638 17619 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:40.039096 17619 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:40.043303 17619 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:40.043500 17619 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:40.043751 17619 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:40.043895 17619 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:40.442163 17619 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.5:32837
I20250811 02:04:40.442430 17748 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.5:32837 every 8 connection(s)
I20250811 02:04:40.445094 17619 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250811 02:04:40.452530 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17619
I20250811 02:04:40.453090 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250811 02:04:40.485222 17749 heartbeater.cc:344] Connected to a master server at 127.12.45.62:45597
I20250811 02:04:40.485666 17749 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:40.486683 17749 heartbeater.cc:507] Master 127.12.45.62:45597 requested a full tablet report, sending...
I20250811 02:04:40.488768 17026 ts_manager.cc:194] Registered new tserver with Master: 090eba7a3a544c0297aaaa2613df7428 (127.12.45.5:32837)
I20250811 02:04:40.490059 17026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.5:40345
I20250811 02:04:40.494016 12468 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250811 02:04:40.556542 17025 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:57588:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 02:04:40.701193 17282 tablet_service.cc:1468] Processing CreateTablet for tablet a6564f88e0ab4761b81fbc95a002de69 (DEFAULT_TABLE table=TestTable [id=59fe7ca7eca740269182fb5b50606ccc]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:04:40.703604 17684 tablet_service.cc:1468] Processing CreateTablet for tablet a6564f88e0ab4761b81fbc95a002de69 (DEFAULT_TABLE table=TestTable [id=59fe7ca7eca740269182fb5b50606ccc]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:04:40.705163 17282 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a6564f88e0ab4761b81fbc95a002de69. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:40.705993 17684 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a6564f88e0ab4761b81fbc95a002de69. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:40.706398 17550 tablet_service.cc:1468] Processing CreateTablet for tablet a6564f88e0ab4761b81fbc95a002de69 (DEFAULT_TABLE table=TestTable [id=59fe7ca7eca740269182fb5b50606ccc]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:04:40.708981 17550 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a6564f88e0ab4761b81fbc95a002de69. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:40.758615 17769 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Bootstrap starting.
I20250811 02:04:40.762753 17770 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Bootstrap starting.
I20250811 02:04:40.775949 17769 tablet_bootstrap.cc:654] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:40.775836 17768 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Bootstrap starting.
I20250811 02:04:40.780807 17770 tablet_bootstrap.cc:654] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:40.781361 17769 log.cc:826] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:40.787156 17770 log.cc:826] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:40.791823 17768 tablet_bootstrap.cc:654] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:40.797209 17768 log.cc:826] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:40.798319 17769 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: No bootstrap required, opened a new log
I20250811 02:04:40.799355 17769 ts_tablet_manager.cc:1397] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Time spent bootstrapping tablet: real 0.045s user 0.008s sys 0.031s
I20250811 02:04:40.805853 17770 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: No bootstrap required, opened a new log
I20250811 02:04:40.806735 17770 ts_tablet_manager.cc:1397] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Time spent bootstrapping tablet: real 0.045s user 0.014s sys 0.026s
I20250811 02:04:40.827741 17768 tablet_bootstrap.cc:492] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: No bootstrap required, opened a new log
I20250811 02:04:40.828325 17768 ts_tablet_manager.cc:1397] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Time spent bootstrapping tablet: real 0.056s user 0.024s sys 0.026s
I20250811 02:04:40.845463 17770 raft_consensus.cc:357] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.847033 17770 raft_consensus.cc:383] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:40.847448 17770 raft_consensus.cc:738] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f676613f3e73471892e10834d199fa10, State: Initialized, Role: FOLLOWER
I20250811 02:04:40.848769 17770 consensus_queue.cc:260] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.856032 17769 raft_consensus.cc:357] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.857606 17768 raft_consensus.cc:357] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.858556 17768 raft_consensus.cc:383] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:40.858561 17769 raft_consensus.cc:383] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:40.858881 17768 raft_consensus.cc:738] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7c6ed6029a144aebb90ae1a6f0384a7f, State: Initialized, Role: FOLLOWER
I20250811 02:04:40.858901 17769 raft_consensus.cc:738] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 090eba7a3a544c0297aaaa2613df7428, State: Initialized, Role: FOLLOWER
I20250811 02:04:40.860075 17768 consensus_queue.cc:260] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.860072 17769 consensus_queue.cc:260] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.860919 17770 ts_tablet_manager.cc:1428] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Time spent starting tablet: real 0.053s user 0.024s sys 0.023s
I20250811 02:04:40.875744 17768 ts_tablet_manager.cc:1428] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Time spent starting tablet: real 0.047s user 0.036s sys 0.011s
I20250811 02:04:40.879933 17749 heartbeater.cc:499] Master 127.12.45.62:45597 was elected leader, sending a full tablet report...
I20250811 02:04:40.882635 17769 ts_tablet_manager.cc:1428] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Time spent starting tablet: real 0.083s user 0.026s sys 0.021s
I20250811 02:04:40.946720 17774 raft_consensus.cc:491] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:40.947548 17774 raft_consensus.cc:513] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.950160 17774 leader_election.cc:290] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 7c6ed6029a144aebb90ae1a6f0384a7f (127.12.45.4:37743), 090eba7a3a544c0297aaaa2613df7428 (127.12.45.5:32837)
W20250811 02:04:40.966883 17750 tablet.cc:2378] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:04:40.975219 17570 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "f676613f3e73471892e10834d199fa10" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" is_pre_election: true
I20250811 02:04:40.975936 17704 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "f676613f3e73471892e10834d199fa10" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "090eba7a3a544c0297aaaa2613df7428" is_pre_election: true
I20250811 02:04:40.976749 17570 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f676613f3e73471892e10834d199fa10 in term 0.
I20250811 02:04:40.977335 17704 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f676613f3e73471892e10834d199fa10 in term 0.
I20250811 02:04:40.979254 17235 leader_election.cc:304] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 7c6ed6029a144aebb90ae1a6f0384a7f, f676613f3e73471892e10834d199fa10; no voters:
I20250811 02:04:40.980386 17774 raft_consensus.cc:2802] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:04:40.980940 17774 raft_consensus.cc:491] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:04:40.981611 17774 raft_consensus.cc:3058] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 0 FOLLOWER]: Advancing to term 1
W20250811 02:04:40.986172 17348 tablet.cc:2378] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:04:40.992901 17774 raft_consensus.cc:513] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:40.995040 17774 leader_election.cc:290] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [CANDIDATE]: Term 1 election: Requested vote from peers 7c6ed6029a144aebb90ae1a6f0384a7f (127.12.45.4:37743), 090eba7a3a544c0297aaaa2613df7428 (127.12.45.5:32837)
I20250811 02:04:40.996011 17570 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "f676613f3e73471892e10834d199fa10" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f"
I20250811 02:04:40.996587 17570 raft_consensus.cc:3058] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:40.996412 17704 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "f676613f3e73471892e10834d199fa10" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "090eba7a3a544c0297aaaa2613df7428"
I20250811 02:04:40.997025 17704 raft_consensus.cc:3058] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:41.005327 17570 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f676613f3e73471892e10834d199fa10 in term 1.
I20250811 02:04:41.005343 17704 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f676613f3e73471892e10834d199fa10 in term 1.
I20250811 02:04:41.006573 17235 leader_election.cc:304] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 7c6ed6029a144aebb90ae1a6f0384a7f, f676613f3e73471892e10834d199fa10; no voters:
I20250811 02:04:41.007678 17774 raft_consensus.cc:2802] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:04:41.014350 17774 raft_consensus.cc:695] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [term 1 LEADER]: Becoming Leader. State: Replica: f676613f3e73471892e10834d199fa10, State: Running, Role: LEADER
I20250811 02:04:41.015666 17774 consensus_queue.cc:237] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:41.031075 17025 catalog_manager.cc:5582] T a6564f88e0ab4761b81fbc95a002de69 P f676613f3e73471892e10834d199fa10 reported cstate change: term changed from 0 to 1, leader changed from <none> to f676613f3e73471892e10834d199fa10 (127.12.45.2). New cstate: current_term: 1 leader_uuid: "f676613f3e73471892e10834d199fa10" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } health_report { overall_health: UNKNOWN } } }
I20250811 02:04:41.071183 12468 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250811 02:04:41.075645 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f676613f3e73471892e10834d199fa10 to finish bootstrapping
I20250811 02:04:41.093452 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 7c6ed6029a144aebb90ae1a6f0384a7f to finish bootstrapping
W20250811 02:04:41.100034 17616 tablet.cc:2378] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:04:41.107034 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 090eba7a3a544c0297aaaa2613df7428 to finish bootstrapping
I20250811 02:04:41.117686 12468 test_util.cc:276] Using random seed: 1488612316
I20250811 02:04:41.154425 12468 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250811 02:04:41.156173 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17218
W20250811 02:04:41.184497 17787 negotiation.cc:337] Failed RPC negotiation. Trace:
0811 02:04:41.171262 (+ 0us) reactor.cc:625] Submitting negotiation task for client connection to 127.12.45.2:45655 (local address 127.0.0.1:55280)
0811 02:04:41.171863 (+ 601us) negotiation.cc:107] Waiting for socket to connect
0811 02:04:41.171896 (+ 33us) client_negotiation.cc:174] Beginning negotiation
0811 02:04:41.172113 (+ 217us) client_negotiation.cc:252] Sending NEGOTIATE NegotiatePB request
0811 02:04:41.183252 (+ 11139us) negotiation.cc:327] Negotiation complete: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
Metrics: {"client-negotiator.queue_time_us":77}
W20250811 02:04:41.196218 17785 meta_cache.cc:302] tablet a6564f88e0ab4761b81fbc95a002de69: replica f676613f3e73471892e10834d199fa10 (127.12.45.2:45655) has failed: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250811 02:04:41.219558 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.237589 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.258647 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.271138 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.298143 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.311842 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.345515 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.361163 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.401567 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.422042 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.464461 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.488909 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.536006 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.563329 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.620405 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.649923 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.725840 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.762512 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.840922 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:41.876117 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:41.962795 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:42.004180 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:42.093129 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:42.136790 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:42.232736 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:42.281710 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:42.334228 17785 meta_cache.cc:302] tablet a6564f88e0ab4761b81fbc95a002de69: replica f676613f3e73471892e10834d199fa10 (127.12.45.2:45655) has failed: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111)
W20250811 02:04:42.381549 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:42.432087 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
W20250811 02:04:42.539412 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
W20250811 02:04:42.592868 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
I20250811 02:04:42.673341 17800 raft_consensus.cc:491] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:04:42.673930 17800 raft_consensus.cc:513] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:42.676977 17800 leader_election.cc:290] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers f676613f3e73471892e10834d199fa10 (127.12.45.2:45655), 090eba7a3a544c0297aaaa2613df7428 (127.12.45.5:32837)
W20250811 02:04:42.684399 17506 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111)
W20250811 02:04:42.697726 17506 leader_election.cc:336] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer f676613f3e73471892e10834d199fa10 (127.12.45.2:45655): Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111)
I20250811 02:04:42.698480 17704 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "090eba7a3a544c0297aaaa2613df7428" is_pre_election: true
I20250811 02:04:42.699314 17704 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 7c6ed6029a144aebb90ae1a6f0384a7f in term 1.
I20250811 02:04:42.700943 17506 leader_election.cc:304] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 090eba7a3a544c0297aaaa2613df7428, 7c6ed6029a144aebb90ae1a6f0384a7f; no voters: f676613f3e73471892e10834d199fa10
I20250811 02:04:42.702006 17800 raft_consensus.cc:2802] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 02:04:42.702364 17800 raft_consensus.cc:491] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:04:42.702661 17800 raft_consensus.cc:3058] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 1 FOLLOWER]: Advancing to term 2
W20250811 02:04:42.704555 17530 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38172: Illegal state: replica 7c6ed6029a144aebb90ae1a6f0384a7f is not leader of this config: current role FOLLOWER
I20250811 02:04:42.710227 17800 raft_consensus.cc:513] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:42.712318 17800 leader_election.cc:290] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 election: Requested vote from peers f676613f3e73471892e10834d199fa10 (127.12.45.2:45655), 090eba7a3a544c0297aaaa2613df7428 (127.12.45.5:32837)
W20250811 02:04:42.719064 17506 leader_election.cc:336] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer f676613f3e73471892e10834d199fa10 (127.12.45.2:45655): Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111)
I20250811 02:04:42.720683 17704 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a6564f88e0ab4761b81fbc95a002de69" candidate_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "090eba7a3a544c0297aaaa2613df7428"
I20250811 02:04:42.721269 17704 raft_consensus.cc:3058] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:04:42.728279 17704 raft_consensus.cc:2466] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 7c6ed6029a144aebb90ae1a6f0384a7f in term 2.
I20250811 02:04:42.729799 17506 leader_election.cc:304] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 090eba7a3a544c0297aaaa2613df7428, 7c6ed6029a144aebb90ae1a6f0384a7f; no voters: f676613f3e73471892e10834d199fa10
I20250811 02:04:42.730834 17800 raft_consensus.cc:2802] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:04:42.732862 17800 raft_consensus.cc:695] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [term 2 LEADER]: Becoming Leader. State: Replica: 7c6ed6029a144aebb90ae1a6f0384a7f, State: Running, Role: LEADER
I20250811 02:04:42.734341 17800 consensus_queue.cc:237] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } }
I20250811 02:04:42.750849 17024 catalog_manager.cc:5582] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f reported cstate change: term changed from 1 to 2, leader changed from f676613f3e73471892e10834d199fa10 (127.12.45.2) to 7c6ed6029a144aebb90ae1a6f0384a7f (127.12.45.4). New cstate: current_term: 2 leader_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f676613f3e73471892e10834d199fa10" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 45655 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "7c6ed6029a144aebb90ae1a6f0384a7f" member_type: VOTER last_known_addr { host: "127.12.45.4" port: 37743 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 } health_report { overall_health: UNKNOWN } } }
W20250811 02:04:42.757175 17664 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:34164: Illegal state: replica 090eba7a3a544c0297aaaa2613df7428 is not leader of this config: current role FOLLOWER
I20250811 02:04:42.825762 17704 raft_consensus.cc:1273] T a6564f88e0ab4761b81fbc95a002de69 P 090eba7a3a544c0297aaaa2613df7428 [term 2 FOLLOWER]: Refusing update from remote peer 7c6ed6029a144aebb90ae1a6f0384a7f: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 02:04:42.827656 17800 consensus_queue.cc:1035] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f [LEADER]: Connected to new peer: Peer: permanent_uuid: "090eba7a3a544c0297aaaa2613df7428" member_type: VOTER last_known_addr { host: "127.12.45.5" port: 32837 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250811 02:04:42.829537 17506 consensus_peers.cc:489] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f -> Peer f676613f3e73471892e10834d199fa10 (127.12.45.2:45655): Couldn't send request to peer f676613f3e73471892e10834d199fa10. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 02:04:42.856709 17810 mvcc.cc:204] Tried to move back new op lower bound from 7187979808037912576 to 7187979807690194944. Current Snapshot: MvccSnapshot[applied={T|T < 7187979808037912576}]
I20250811 02:04:42.857436 17812 mvcc.cc:204] Tried to move back new op lower bound from 7187979808037912576 to 7187979807690194944. Current Snapshot: MvccSnapshot[applied={T|T < 7187979808037912576}]
I20250811 02:04:43.437768 17684 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:04:43.492270 17550 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:04:43.504848 17415 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:04:43.527460 17149 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250811 02:04:45.376775 17506 consensus_peers.cc:489] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f -> Peer f676613f3e73471892e10834d199fa10 (127.12.45.2:45655): Couldn't send request to peer f676613f3e73471892e10834d199fa10. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250811 02:04:45.521242 17684 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:04:45.596313 17415 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:04:45.618855 17550 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:04:45.622833 17149 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250811 02:04:47.652312 17506 consensus_peers.cc:489] T a6564f88e0ab4761b81fbc95a002de69 P 7c6ed6029a144aebb90ae1a6f0384a7f -> Peer f676613f3e73471892e10834d199fa10 (127.12.45.2:45655): Couldn't send request to peer f676613f3e73471892e10834d199fa10. Status: Network error: Client connection negotiation failed: client connection to 127.12.45.2:45655: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
I20250811 02:04:48.071939 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17085
I20250811 02:04:48.098994 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17351
I20250811 02:04:48.125068 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17484
I20250811 02:04:48.166031 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17619
I20250811 02:04:48.198153 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 16993
2025-08-11T02:04:48Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (21290 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (21290 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250811 02:04:48.260746 12468 test_util.cc:276] Using random seed: 1495755364
I20250811 02:04:48.265059 12468 ts_itest-base.cc:115] Starting cluster with:
I20250811 02:04:48.265218 12468 ts_itest-base.cc:116] --------------
I20250811 02:04:48.265328 12468 ts_itest-base.cc:117] 1 tablet servers
I20250811 02:04:48.265431 12468 ts_itest-base.cc:118] 1 replicas per TS
I20250811 02:04:48.265534 12468 ts_itest-base.cc:119] --------------
2025-08-11T02:04:48Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:04:48Z Disabled control of system clock
I20250811 02:04:48.312294 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:33479
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:45319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:33479 with env {}
W20250811 02:04:48.633037 17903 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:48.633651 17903 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:48.634083 17903 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:48.666647 17903 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:04:48.666989 17903 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:48.667212 17903 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:04:48.667413 17903 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:04:48.704180 17903 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:33479
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:33479
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:48.705607 17903 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:48.707377 17903 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:48.720299 17909 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:50.123615 17908 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 17903
W20250811 02:04:50.526665 17908 kernel_stack_watchdog.cc:198] Thread 17903 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:48.721638 17910 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:50.529102 17911 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1806 milliseconds
W20250811 02:04:50.529402 17903 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.807s user 0.000s sys 0.002s
W20250811 02:04:50.529786 17903 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.807s user 0.000s sys 0.002s
W20250811 02:04:50.534240 17913 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:04:50.534271 17903 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:50.535574 17903 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:50.538684 17903 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:50.540145 17903 hybrid_clock.cc:648] HybridClock initialized: now 1754877890540098 us; error 50 us; skew 500 ppm
I20250811 02:04:50.540969 17903 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:50.547152 17903 webserver.cc:489] Webserver started at http://127.12.45.62:38939/ using document root <none> and password file <none>
I20250811 02:04:50.548197 17903 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:50.548410 17903 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:50.548880 17903 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:50.553421 17903 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "91a88ba0b6c34971b1166869ba99b45c"
format_stamp: "Formatted at 2025-08-11 02:04:50 on dist-test-slave-xn5f"
I20250811 02:04:50.554739 17903 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "91a88ba0b6c34971b1166869ba99b45c"
format_stamp: "Formatted at 2025-08-11 02:04:50 on dist-test-slave-xn5f"
I20250811 02:04:50.562206 17903 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.001s
I20250811 02:04:50.567924 17919 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:50.568964 17903 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:04:50.569304 17903 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
uuid: "91a88ba0b6c34971b1166869ba99b45c"
format_stamp: "Formatted at 2025-08-11 02:04:50 on dist-test-slave-xn5f"
I20250811 02:04:50.569648 17903 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:50.619715 17903 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:50.621320 17903 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:50.621789 17903 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:50.696743 17903 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:33479
I20250811 02:04:50.696802 17970 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:33479 every 8 connection(s)
I20250811 02:04:50.699661 17903 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 02:04:50.705168 17971 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:50.710065 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17903
I20250811 02:04:50.710520 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 02:04:50.731781 17971 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c: Bootstrap starting.
I20250811 02:04:50.739142 17971 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:50.740931 17971 log.cc:826] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:50.745662 17971 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c: No bootstrap required, opened a new log
I20250811 02:04:50.765107 17971 raft_consensus.cc:357] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } }
I20250811 02:04:50.765834 17971 raft_consensus.cc:383] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:50.766076 17971 raft_consensus.cc:738] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91a88ba0b6c34971b1166869ba99b45c, State: Initialized, Role: FOLLOWER
I20250811 02:04:50.766737 17971 consensus_queue.cc:260] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } }
I20250811 02:04:50.767277 17971 raft_consensus.cc:397] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:04:50.767509 17971 raft_consensus.cc:491] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:04:50.767798 17971 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:50.772092 17971 raft_consensus.cc:513] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } }
I20250811 02:04:50.773007 17971 leader_election.cc:304] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 91a88ba0b6c34971b1166869ba99b45c; no voters:
I20250811 02:04:50.774961 17971 leader_election.cc:290] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:04:50.775758 17976 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:04:50.778018 17976 raft_consensus.cc:695] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [term 1 LEADER]: Becoming Leader. State: Replica: 91a88ba0b6c34971b1166869ba99b45c, State: Running, Role: LEADER
I20250811 02:04:50.778832 17976 consensus_queue.cc:237] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } }
I20250811 02:04:50.779845 17971 sys_catalog.cc:564] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:04:50.788949 17977 sys_catalog.cc:455] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "91a88ba0b6c34971b1166869ba99b45c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } } }
I20250811 02:04:50.789887 17977 sys_catalog.cc:458] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:50.792662 17978 sys_catalog.cc:455] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 91a88ba0b6c34971b1166869ba99b45c. Latest consensus state: current_term: 1 leader_uuid: "91a88ba0b6c34971b1166869ba99b45c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "91a88ba0b6c34971b1166869ba99b45c" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 33479 } } }
I20250811 02:04:50.793205 17978 sys_catalog.cc:458] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:50.796146 17985 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:04:50.808895 17985 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:04:50.824671 17985 catalog_manager.cc:1349] Generated new cluster ID: 64650a0928a24afd99819c80dc57ac82
I20250811 02:04:50.825026 17985 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:04:50.857793 17985 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:04:50.859454 17985 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:04:50.873277 17985 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 91a88ba0b6c34971b1166869ba99b45c: Generated new TSK 0
I20250811 02:04:50.874269 17985 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:04:50.885677 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:33479
--builtin_ntp_servers=127.12.45.20:45319
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 02:04:51.220888 17995 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:51.221467 17995 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:51.221982 17995 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:51.255810 17995 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:51.256717 17995 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:04:51.293215 17995 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:45319
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:33479
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:51.294832 17995 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:51.296700 17995 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:51.310400 18001 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:52.713320 18000 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 17995
W20250811 02:04:51.312160 18002 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:52.868669 17995 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.557s user 0.546s sys 1.007s
W20250811 02:04:52.870630 17995 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.559s user 0.546s sys 1.008s
W20250811 02:04:52.872156 18004 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:52.874607 18003 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1558 milliseconds
I20250811 02:04:52.874642 17995 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:52.876195 17995 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:52.878975 17995 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:52.880481 17995 hybrid_clock.cc:648] HybridClock initialized: now 1754877892880432 us; error 45 us; skew 500 ppm
I20250811 02:04:52.881609 17995 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:52.889442 17995 webserver.cc:489] Webserver started at http://127.12.45.1:39951/ using document root <none> and password file <none>
I20250811 02:04:52.890450 17995 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:52.890677 17995 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:52.891229 17995 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:52.895924 17995 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "a2ca567471cf40ae85483255eaaee5e3"
format_stamp: "Formatted at 2025-08-11 02:04:52 on dist-test-slave-xn5f"
I20250811 02:04:52.897301 17995 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "a2ca567471cf40ae85483255eaaee5e3"
format_stamp: "Formatted at 2025-08-11 02:04:52 on dist-test-slave-xn5f"
I20250811 02:04:52.905400 17995 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.000s sys 0.008s
I20250811 02:04:52.911736 18011 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:52.913035 17995 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 02:04:52.913362 17995 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "a2ca567471cf40ae85483255eaaee5e3"
format_stamp: "Formatted at 2025-08-11 02:04:52 on dist-test-slave-xn5f"
I20250811 02:04:52.913713 17995 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:52.974593 17995 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:52.976287 17995 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:52.976753 17995 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:52.980050 17995 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:04:52.985420 17995 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:04:52.985646 17995 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:52.985895 17995 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:04:52.986097 17995 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:53.168066 17995 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:46473
I20250811 02:04:53.168210 18123 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:46473 every 8 connection(s)
I20250811 02:04:53.171108 17995 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 02:04:53.178988 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 17995
I20250811 02:04:53.179543 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754877736375003-12468-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 02:04:53.198668 18124 heartbeater.cc:344] Connected to a master server at 127.12.45.62:33479
I20250811 02:04:53.199304 18124 heartbeater.cc:461] Registering TS with master...
I20250811 02:04:53.200830 18124 heartbeater.cc:507] Master 127.12.45.62:33479 requested a full tablet report, sending...
I20250811 02:04:53.203812 17936 ts_manager.cc:194] Registered new tserver with Master: a2ca567471cf40ae85483255eaaee5e3 (127.12.45.1:46473)
I20250811 02:04:53.206017 17936 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:32883
I20250811 02:04:53.213527 12468 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250811 02:04:53.251951 17936 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:42902:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 02:04:53.322764 18059 tablet_service.cc:1468] Processing CreateTablet for tablet bf7b20b0c8e2457188f28cc7e0df5b39 (DEFAULT_TABLE table=TestTable [id=ec0d994ab39748668e81fac6aa546abc]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:04:53.324847 18059 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bf7b20b0c8e2457188f28cc7e0df5b39. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:53.346437 18139 tablet_bootstrap.cc:492] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: Bootstrap starting.
I20250811 02:04:53.354184 18139 tablet_bootstrap.cc:654] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:53.356571 18139 log.cc:826] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:53.362190 18139 tablet_bootstrap.cc:492] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: No bootstrap required, opened a new log
I20250811 02:04:53.362779 18139 ts_tablet_manager.cc:1397] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: Time spent bootstrapping tablet: real 0.017s user 0.011s sys 0.005s
I20250811 02:04:53.389264 18139 raft_consensus.cc:357] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a2ca567471cf40ae85483255eaaee5e3" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 46473 } }
I20250811 02:04:53.390095 18139 raft_consensus.cc:383] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:53.390393 18139 raft_consensus.cc:738] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a2ca567471cf40ae85483255eaaee5e3, State: Initialized, Role: FOLLOWER
I20250811 02:04:53.391317 18139 consensus_queue.cc:260] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a2ca567471cf40ae85483255eaaee5e3" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 46473 } }
I20250811 02:04:53.392037 18139 raft_consensus.cc:397] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:04:53.392380 18139 raft_consensus.cc:491] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:04:53.392804 18139 raft_consensus.cc:3058] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:53.399621 18139 raft_consensus.cc:513] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a2ca567471cf40ae85483255eaaee5e3" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 46473 } }
I20250811 02:04:53.400696 18139 leader_election.cc:304] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a2ca567471cf40ae85483255eaaee5e3; no voters:
I20250811 02:04:53.402973 18139 leader_election.cc:290] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:04:53.403359 18141 raft_consensus.cc:2802] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:04:53.405859 18141 raft_consensus.cc:695] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [term 1 LEADER]: Becoming Leader. State: Replica: a2ca567471cf40ae85483255eaaee5e3, State: Running, Role: LEADER
I20250811 02:04:53.407063 18124 heartbeater.cc:499] Master 127.12.45.62:33479 was elected leader, sending a full tablet report...
I20250811 02:04:53.406837 18141 consensus_queue.cc:237] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a2ca567471cf40ae85483255eaaee5e3" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 46473 } }
I20250811 02:04:53.410213 18139 ts_tablet_manager.cc:1428] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3: Time spent starting tablet: real 0.047s user 0.039s sys 0.008s
I20250811 02:04:53.420645 17936 catalog_manager.cc:5582] T bf7b20b0c8e2457188f28cc7e0df5b39 P a2ca567471cf40ae85483255eaaee5e3 reported cstate change: term changed from 0 to 1, leader changed from <none> to a2ca567471cf40ae85483255eaaee5e3 (127.12.45.1). New cstate: current_term: 1 leader_uuid: "a2ca567471cf40ae85483255eaaee5e3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a2ca567471cf40ae85483255eaaee5e3" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 46473 } health_report { overall_health: HEALTHY } } }
I20250811 02:04:53.480803 12468 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250811 02:04:53.484148 12468 ts_itest-base.cc:246] Waiting for 1 tablets on tserver a2ca567471cf40ae85483255eaaee5e3 to finish bootstrapping
I20250811 02:04:56.173451 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17995
I20250811 02:04:56.198776 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 17903
2025-08-11T02:04:56Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (7992 ms)
[----------] 1 test from ListTableCliSimpleParamTest (7992 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250811 02:04:56.253690 12468 test_util.cc:276] Using random seed: 1503748316
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (12 ms)
[----------] 1 test from ListTableCliParamTest (13 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-08-11T02:04:56Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T02:04:56Z Disabled control of system clock
I20250811 02:04:56.307642 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:41513
--webserver_interface=127.12.45.62
--webserver_port=0
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:41513 with env {}
W20250811 02:04:56.615298 18167 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:56.615900 18167 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:56.616333 18167 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:56.647989 18167 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:04:56.648288 18167 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:56.648504 18167 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:04:56.648700 18167 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:04:56.685029 18167 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:41513
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:41513
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:56.686450 18167 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:56.688150 18167 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:56.700237 18173 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:56.700589 18174 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:57.921605 18176 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:04:57.924381 18175 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1219 milliseconds
I20250811 02:04:57.924505 18167 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:04:57.925789 18167 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:04:57.928485 18167 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:04:57.929821 18167 hybrid_clock.cc:648] HybridClock initialized: now 1754877897929777 us; error 42 us; skew 500 ppm
I20250811 02:04:57.930652 18167 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:04:57.936836 18167 webserver.cc:489] Webserver started at http://127.12.45.62:33011/ using document root <none> and password file <none>
I20250811 02:04:57.937731 18167 fs_manager.cc:362] Metadata directory not provided
I20250811 02:04:57.937943 18167 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:04:57.938400 18167 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:04:57.944060 18167 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/instance:
uuid: "79e9d08f804c47f9ac6454658f305d3b"
format_stamp: "Formatted at 2025-08-11 02:04:57 on dist-test-slave-xn5f"
I20250811 02:04:57.945148 18167 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal/instance:
uuid: "79e9d08f804c47f9ac6454658f305d3b"
format_stamp: "Formatted at 2025-08-11 02:04:57 on dist-test-slave-xn5f"
I20250811 02:04:57.952229 18167 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250811 02:04:57.957746 18184 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:04:57.958784 18167 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250811 02:04:57.959228 18167 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
uuid: "79e9d08f804c47f9ac6454658f305d3b"
format_stamp: "Formatted at 2025-08-11 02:04:57 on dist-test-slave-xn5f"
I20250811 02:04:57.959677 18167 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:04:58.010375 18167 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:04:58.011941 18167 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:04:58.012501 18167 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:04:58.084877 18167 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:41513
I20250811 02:04:58.084947 18235 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:41513 every 8 connection(s)
I20250811 02:04:58.087775 18167 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
I20250811 02:04:58.093089 18236 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:04:58.098093 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18167
I20250811 02:04:58.098886 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal/instance
I20250811 02:04:58.118831 18236 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b: Bootstrap starting.
I20250811 02:04:58.124454 18236 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b: Neither blocks nor log segments found. Creating new log.
I20250811 02:04:58.126566 18236 log.cc:826] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b: Log is configured to *not* fsync() on all Append() calls
I20250811 02:04:58.131457 18236 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b: No bootstrap required, opened a new log
I20250811 02:04:58.149135 18236 raft_consensus.cc:357] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:04:58.149776 18236 raft_consensus.cc:383] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:04:58.150009 18236 raft_consensus.cc:738] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 79e9d08f804c47f9ac6454658f305d3b, State: Initialized, Role: FOLLOWER
I20250811 02:04:58.150624 18236 consensus_queue.cc:260] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:04:58.151240 18236 raft_consensus.cc:397] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:04:58.151494 18236 raft_consensus.cc:491] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:04:58.151764 18236 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:04:58.156332 18236 raft_consensus.cc:513] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:04:58.157078 18236 leader_election.cc:304] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 79e9d08f804c47f9ac6454658f305d3b; no voters:
I20250811 02:04:58.158692 18236 leader_election.cc:290] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:04:58.159449 18241 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:04:58.161434 18241 raft_consensus.cc:695] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [term 1 LEADER]: Becoming Leader. State: Replica: 79e9d08f804c47f9ac6454658f305d3b, State: Running, Role: LEADER
I20250811 02:04:58.162287 18241 consensus_queue.cc:237] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:04:58.163403 18236 sys_catalog.cc:564] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:04:58.174170 18242 sys_catalog.cc:455] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "79e9d08f804c47f9ac6454658f305d3b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } } }
I20250811 02:04:58.175128 18242 sys_catalog.cc:458] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:58.174896 18243 sys_catalog.cc:455] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [sys.catalog]: SysCatalogTable state changed. Reason: New leader 79e9d08f804c47f9ac6454658f305d3b. Latest consensus state: current_term: 1 leader_uuid: "79e9d08f804c47f9ac6454658f305d3b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "79e9d08f804c47f9ac6454658f305d3b" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } } }
I20250811 02:04:58.175793 18243 sys_catalog.cc:458] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b [sys.catalog]: This master's current role is: LEADER
I20250811 02:04:58.180761 18250 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:04:58.194558 18250 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:04:58.210837 18250 catalog_manager.cc:1349] Generated new cluster ID: 4d0dad3b49ba4f3ab09e79399a5e685b
I20250811 02:04:58.211177 18250 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:04:58.236996 18250 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:04:58.238806 18250 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:04:58.252940 18250 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 79e9d08f804c47f9ac6454658f305d3b: Generated new TSK 0
I20250811 02:04:58.253811 18250 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 02:04:58.275220 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:0
--local_ip_for_outbound_sockets=127.12.45.1
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250811 02:04:58.584543 18260 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:04:58.585057 18260 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:04:58.585564 18260 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:04:58.616748 18260 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:04:58.617625 18260 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:04:58.653628 18260 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:04:58.655050 18260 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:04:58.656674 18260 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:04:58.669312 18266 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:00.072537 18265 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 18260
W20250811 02:05:00.135097 18265 kernel_stack_watchdog.cc:198] Thread 18260 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:04:58.670609 18267 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:00.137548 18260 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.467s user 0.496s sys 0.938s
W20250811 02:05:00.138120 18260 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.467s user 0.496s sys 0.939s
W20250811 02:05:00.138625 18269 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:00.142306 18268 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1470 milliseconds
I20250811 02:05:00.142329 18260 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:00.143666 18260 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:00.145848 18260 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:00.147204 18260 hybrid_clock.cc:648] HybridClock initialized: now 1754877900147162 us; error 47 us; skew 500 ppm
I20250811 02:05:00.147984 18260 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:00.154162 18260 webserver.cc:489] Webserver started at http://127.12.45.1:38277/ using document root <none> and password file <none>
I20250811 02:05:00.155215 18260 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:00.155448 18260 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:00.155896 18260 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:05:00.160296 18260 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/instance:
uuid: "4ffc0978d8024920b9bfc456f8de19c4"
format_stamp: "Formatted at 2025-08-11 02:05:00 on dist-test-slave-xn5f"
I20250811 02:05:00.161396 18260 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal/instance:
uuid: "4ffc0978d8024920b9bfc456f8de19c4"
format_stamp: "Formatted at 2025-08-11 02:05:00 on dist-test-slave-xn5f"
I20250811 02:05:00.169270 18260 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.001s
I20250811 02:05:00.175172 18276 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:00.176337 18260 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250811 02:05:00.176652 18260 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
uuid: "4ffc0978d8024920b9bfc456f8de19c4"
format_stamp: "Formatted at 2025-08-11 02:05:00 on dist-test-slave-xn5f"
I20250811 02:05:00.176981 18260 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:00.229555 18260 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:00.231106 18260 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:00.231546 18260 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:00.234172 18260 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:00.238493 18260 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:05:00.238706 18260 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:00.238991 18260 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:05:00.239177 18260 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:00.414196 18260 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:40133
I20250811 02:05:00.414386 18388 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:40133 every 8 connection(s)
I20250811 02:05:00.416843 18260 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
I20250811 02:05:00.420893 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18260
I20250811 02:05:00.421434 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal/instance
I20250811 02:05:00.430713 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:0
--local_ip_for_outbound_sockets=127.12.45.2
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 02:05:00.457732 18389 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:00.458302 18389 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:00.459738 18389 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:00.463248 18201 ts_manager.cc:194] Registered new tserver with Master: 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133)
I20250811 02:05:00.466372 18201 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:48641
W20250811 02:05:00.756606 18393 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:00.757162 18393 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:00.757673 18393 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:00.790485 18393 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:00.791378 18393 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:05:00.826023 18393 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:00.827489 18393 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:00.829160 18393 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:00.841346 18399 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:01.470795 18389 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
W20250811 02:05:00.841760 18400 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:02.044914 18393 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.203s user 0.392s sys 0.809s
W20250811 02:05:02.045413 18393 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.204s user 0.393s sys 0.810s
W20250811 02:05:02.046056 18402 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:02.047746 18401 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1204 milliseconds
I20250811 02:05:02.047835 18393 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:02.049413 18393 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:02.052196 18393 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:02.053697 18393 hybrid_clock.cc:648] HybridClock initialized: now 1754877902053658 us; error 26 us; skew 500 ppm
I20250811 02:05:02.054900 18393 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:02.064137 18393 webserver.cc:489] Webserver started at http://127.12.45.2:44933/ using document root <none> and password file <none>
I20250811 02:05:02.065546 18393 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:02.065820 18393 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:02.066531 18393 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:05:02.073678 18393 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/instance:
uuid: "72051acd86da47b79688091dcfdec9e1"
format_stamp: "Formatted at 2025-08-11 02:05:02 on dist-test-slave-xn5f"
I20250811 02:05:02.075284 18393 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal/instance:
uuid: "72051acd86da47b79688091dcfdec9e1"
format_stamp: "Formatted at 2025-08-11 02:05:02 on dist-test-slave-xn5f"
I20250811 02:05:02.085963 18393 fs_manager.cc:696] Time spent creating directory manager: real 0.010s user 0.005s sys 0.005s
I20250811 02:05:02.094316 18409 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:02.095887 18393 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.006s sys 0.000s
I20250811 02:05:02.096300 18393 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
uuid: "72051acd86da47b79688091dcfdec9e1"
format_stamp: "Formatted at 2025-08-11 02:05:02 on dist-test-slave-xn5f"
I20250811 02:05:02.096788 18393 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:02.168207 18393 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:02.169636 18393 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:02.170048 18393 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:02.172506 18393 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:02.176510 18393 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:05:02.176703 18393 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:02.176961 18393 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:05:02.177100 18393 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:02.307379 18393 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:39469
I20250811 02:05:02.307497 18521 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:39469 every 8 connection(s)
I20250811 02:05:02.309937 18393 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
I20250811 02:05:02.317967 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18393
I20250811 02:05:02.318389 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal/instance
I20250811 02:05:02.325327 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:0
--local_ip_for_outbound_sockets=127.12.45.3
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 02:05:02.330919 18522 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:02.331367 18522 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:02.332353 18522 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:02.334602 18201 ts_manager.cc:194] Registered new tserver with Master: 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469)
I20250811 02:05:02.335897 18201 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:55255
W20250811 02:05:02.625185 18526 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:02.625701 18526 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:02.626209 18526 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:02.656785 18526 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:02.657630 18526 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:05:02.690915 18526 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=0
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:02.692330 18526 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:02.693922 18526 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:02.705307 18532 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:03.339459 18522 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
W20250811 02:05:02.706223 18533 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:03.927112 18535 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:03.928375 18534 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1218 milliseconds
W20250811 02:05:03.928726 18526 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.223s user 0.386s sys 0.835s
W20250811 02:05:03.929008 18526 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.223s user 0.386s sys 0.835s
I20250811 02:05:03.929230 18526 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:03.930286 18526 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:03.932561 18526 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:03.933928 18526 hybrid_clock.cc:648] HybridClock initialized: now 1754877903933877 us; error 61 us; skew 500 ppm
I20250811 02:05:03.934736 18526 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:03.942596 18526 webserver.cc:489] Webserver started at http://127.12.45.3:37237/ using document root <none> and password file <none>
I20250811 02:05:03.943653 18526 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:03.943882 18526 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:03.944355 18526 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:05:03.949149 18526 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/instance:
uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
format_stamp: "Formatted at 2025-08-11 02:05:03 on dist-test-slave-xn5f"
I20250811 02:05:03.950474 18526 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal/instance:
uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
format_stamp: "Formatted at 2025-08-11 02:05:03 on dist-test-slave-xn5f"
I20250811 02:05:03.958890 18526 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.001s
I20250811 02:05:03.965158 18542 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:03.966459 18526 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250811 02:05:03.966797 18526 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
format_stamp: "Formatted at 2025-08-11 02:05:03 on dist-test-slave-xn5f"
I20250811 02:05:03.967219 18526 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:04.045532 18526 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:04.047091 18526 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:04.047519 18526 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:04.050014 18526 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:04.054236 18526 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 02:05:04.054456 18526 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:04.054700 18526 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 02:05:04.054862 18526 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:04.196789 18526 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:43595
I20250811 02:05:04.196894 18654 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:43595 every 8 connection(s)
I20250811 02:05:04.199415 18526 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
I20250811 02:05:04.204938 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18526
I20250811 02:05:04.205456 12468 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal/instance
I20250811 02:05:04.226431 18655 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:04.226828 18655 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:04.227842 18655 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:04.229862 18201 ts_manager.cc:194] Registered new tserver with Master: f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:04.231161 18201 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:60417
I20250811 02:05:04.238893 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:05:04.267062 12468 test_util.cc:276] Using random seed: 1511761695
I20250811 02:05:04.306375 18201 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:40570:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250811 02:05:04.310161 18201 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:05:04.371084 18457 tablet_service.cc:1468] Processing CreateTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 (DEFAULT_TABLE table=pre_rebuild [id=babd675f6cb74cbe85d826d9748d3319]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:04.373404 18457 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 99f93a0890d7435c9e6d36afcd715e57. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:04.374228 18590 tablet_service.cc:1468] Processing CreateTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 (DEFAULT_TABLE table=pre_rebuild [id=babd675f6cb74cbe85d826d9748d3319]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:04.374856 18324 tablet_service.cc:1468] Processing CreateTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 (DEFAULT_TABLE table=pre_rebuild [id=babd675f6cb74cbe85d826d9748d3319]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:04.376325 18590 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 99f93a0890d7435c9e6d36afcd715e57. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:04.376786 18324 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 99f93a0890d7435c9e6d36afcd715e57. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:04.394431 18679 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Bootstrap starting.
I20250811 02:05:04.400189 18679 tablet_bootstrap.cc:654] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:04.401882 18679 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:04.407137 18680 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Bootstrap starting.
I20250811 02:05:04.408756 18679 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: No bootstrap required, opened a new log
I20250811 02:05:04.409261 18679 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Time spent bootstrapping tablet: real 0.016s user 0.005s sys 0.007s
I20250811 02:05:04.409679 18681 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Bootstrap starting.
I20250811 02:05:04.415038 18680 tablet_bootstrap.cc:654] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:04.417601 18680 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:04.418437 18681 tablet_bootstrap.cc:654] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:04.421007 18681 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:04.437155 18680 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: No bootstrap required, opened a new log
I20250811 02:05:04.437907 18680 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent bootstrapping tablet: real 0.031s user 0.010s sys 0.019s
I20250811 02:05:04.445541 18679 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.446689 18679 raft_consensus.cc:383] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:04.447067 18681 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: No bootstrap required, opened a new log
I20250811 02:05:04.447088 18679 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 72051acd86da47b79688091dcfdec9e1, State: Initialized, Role: FOLLOWER
I20250811 02:05:04.447715 18681 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent bootstrapping tablet: real 0.038s user 0.017s sys 0.018s
I20250811 02:05:04.448172 18679 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.460071 18679 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Time spent starting tablet: real 0.051s user 0.029s sys 0.013s
I20250811 02:05:04.465489 18680 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.466228 18680 raft_consensus.cc:383] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:04.466490 18680 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3b2fba7dba94bab909e0f263b9edf6b, State: Initialized, Role: FOLLOWER
I20250811 02:05:04.467376 18680 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.470527 18655 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
I20250811 02:05:04.471263 18680 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent starting tablet: real 0.033s user 0.027s sys 0.005s
I20250811 02:05:04.472862 18681 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.473644 18681 raft_consensus.cc:383] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:04.473951 18681 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ffc0978d8024920b9bfc456f8de19c4, State: Initialized, Role: FOLLOWER
I20250811 02:05:04.474864 18681 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.478996 18681 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent starting tablet: real 0.031s user 0.026s sys 0.005s
I20250811 02:05:04.493515 18685 raft_consensus.cc:491] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:05:04.493984 18685 raft_consensus.cc:513] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.496351 18685 leader_election.cc:290] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133), f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:04.508647 18344 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "72051acd86da47b79688091dcfdec9e1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ffc0978d8024920b9bfc456f8de19c4" is_pre_election: true
I20250811 02:05:04.509526 18344 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 72051acd86da47b79688091dcfdec9e1 in term 0.
I20250811 02:05:04.509589 18610 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "72051acd86da47b79688091dcfdec9e1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" is_pre_election: true
I20250811 02:05:04.510311 18610 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 72051acd86da47b79688091dcfdec9e1 in term 0.
I20250811 02:05:04.510644 18413 leader_election.cc:304] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4ffc0978d8024920b9bfc456f8de19c4, 72051acd86da47b79688091dcfdec9e1; no voters:
I20250811 02:05:04.511471 18685 raft_consensus.cc:2802] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:05:04.511729 18685 raft_consensus.cc:491] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:05:04.512022 18685 raft_consensus.cc:3058] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:04.517047 18685 raft_consensus.cc:513] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.518379 18685 leader_election.cc:290] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [CANDIDATE]: Term 1 election: Requested vote from peers 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133), f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:04.519122 18344 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "72051acd86da47b79688091dcfdec9e1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ffc0978d8024920b9bfc456f8de19c4"
I20250811 02:05:04.519297 18610 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "72051acd86da47b79688091dcfdec9e1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
I20250811 02:05:04.519629 18344 raft_consensus.cc:3058] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:04.519687 18610 raft_consensus.cc:3058] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:04.523903 18344 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 72051acd86da47b79688091dcfdec9e1 in term 1.
I20250811 02:05:04.524034 18610 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 72051acd86da47b79688091dcfdec9e1 in term 1.
I20250811 02:05:04.524783 18413 leader_election.cc:304] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4ffc0978d8024920b9bfc456f8de19c4, 72051acd86da47b79688091dcfdec9e1; no voters:
I20250811 02:05:04.525395 18685 raft_consensus.cc:2802] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:05:04.526870 18685 raft_consensus.cc:695] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 LEADER]: Becoming Leader. State: Replica: 72051acd86da47b79688091dcfdec9e1, State: Running, Role: LEADER
I20250811 02:05:04.527665 18685 consensus_queue.cc:237] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:04.536737 18201 catalog_manager.cc:5582] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 reported cstate change: term changed from 0 to 1, leader changed from <none> to 72051acd86da47b79688091dcfdec9e1 (127.12.45.2). New cstate: current_term: 1 leader_uuid: "72051acd86da47b79688091dcfdec9e1" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } health_report { overall_health: UNKNOWN } } }
W20250811 02:05:04.566609 18523 tablet.cc:2378] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 02:05:04.676649 18390 tablet.cc:2378] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:05:04.700664 18610 raft_consensus.cc:1273] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Refusing update from remote peer 72051acd86da47b79688091dcfdec9e1: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 02:05:04.701149 18344 raft_consensus.cc:1273] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Refusing update from remote peer 72051acd86da47b79688091dcfdec9e1: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 02:05:04.702222 18690 consensus_queue.cc:1035] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250811 02:05:04.702845 18685 consensus_queue.cc:1035] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250811 02:05:04.704228 18656 tablet.cc:2378] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:05:04.734787 18699 mvcc.cc:204] Tried to move back new op lower bound from 7187979897637048320 to 7187979896952377344. Current Snapshot: MvccSnapshot[applied={T|T < 7187979897637048320}]
I20250811 02:05:04.737876 18697 mvcc.cc:204] Tried to move back new op lower bound from 7187979897637048320 to 7187979896952377344. Current Snapshot: MvccSnapshot[applied={T|T < 7187979897637048320}]
I20250811 02:05:04.745143 18698 mvcc.cc:204] Tried to move back new op lower bound from 7187979897637048320 to 7187979896952377344. Current Snapshot: MvccSnapshot[applied={T|T < 7187979897637048320}]
W20250811 02:05:08.860064 18385 debug-util.cc:398] Leaking SignalData structure 0x7b08000acee0 after lost signal to thread 18261
W20250811 02:05:08.861476 18385 debug-util.cc:398] Leaking SignalData structure 0x7b08000cbee0 after lost signal to thread 18388
I20250811 02:05:10.252256 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18167
W20250811 02:05:10.696316 18731 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:10.696990 18731 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:10.735561 18731 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 02:05:10.801833 18655 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:41513 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:41513: connect: Connection refused (error 111)
W20250811 02:05:10.819921 18522 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:41513 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:41513: connect: Connection refused (error 111)
W20250811 02:05:10.864600 18389 heartbeater.cc:646] Failed to heartbeat to 127.12.45.62:41513 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.12.45.62:41513: connect: Connection refused (error 111)
W20250811 02:05:12.192817 18737 debug-util.cc:398] Leaking SignalData structure 0x7b08000373e0 after lost signal to thread 18731
W20250811 02:05:12.193459 18737 kernel_stack_watchdog.cc:198] Thread 18731 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 402ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:05:12.301752 18731 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.514s user 0.521s sys 0.865s
W20250811 02:05:12.456094 18731 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.669s user 0.529s sys 0.873s
I20250811 02:05:12.569248 18731 minidump.cc:252] Setting minidump size limit to 20M
I20250811 02:05:12.571990 18731 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:12.573500 18731 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:12.587371 18764 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:12.587538 18765 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:12.591101 18731 server_base.cc:1047] running on GCE node
W20250811 02:05:12.591715 18768 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:12.592736 18731 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250811 02:05:12.593225 18731 hybrid_clock.cc:648] HybridClock initialized: now 1754877912593181 us; error 136543 us; skew 500 ppm
I20250811 02:05:12.593912 18731 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:12.598631 18731 webserver.cc:489] Webserver started at http://0.0.0.0:46069/ using document root <none> and password file <none>
I20250811 02:05:12.599597 18731 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:12.599836 18731 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:12.600291 18731 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 02:05:12.604655 18731 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/instance:
uuid: "7d111deb3d0a4bab93b13f193855cef5"
format_stamp: "Formatted at 2025-08-11 02:05:12 on dist-test-slave-xn5f"
I20250811 02:05:12.605785 18731 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal/instance:
uuid: "7d111deb3d0a4bab93b13f193855cef5"
format_stamp: "Formatted at 2025-08-11 02:05:12 on dist-test-slave-xn5f"
I20250811 02:05:12.612121 18731 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.002s
I20250811 02:05:12.617106 18772 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:12.618062 18731 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250811 02:05:12.618376 18731 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
uuid: "7d111deb3d0a4bab93b13f193855cef5"
format_stamp: "Formatted at 2025-08-11 02:05:12 on dist-test-slave-xn5f"
I20250811 02:05:12.618700 18731 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:12.874914 18731 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:12.876482 18731 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:12.876930 18731 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:12.882350 18731 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:12.899171 18731 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Bootstrap starting.
I20250811 02:05:12.904346 18731 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:12.906127 18731 log.cc:826] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:12.910413 18731 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: No bootstrap required, opened a new log
I20250811 02:05:12.927115 18731 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER }
I20250811 02:05:12.927665 18731 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:12.927906 18731 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7d111deb3d0a4bab93b13f193855cef5, State: Initialized, Role: FOLLOWER
I20250811 02:05:12.928663 18731 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER }
I20250811 02:05:12.929162 18731 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:05:12.929419 18731 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:05:12.929735 18731 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:12.934024 18731 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER }
I20250811 02:05:12.934806 18731 leader_election.cc:304] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7d111deb3d0a4bab93b13f193855cef5; no voters:
I20250811 02:05:12.936520 18731 leader_election.cc:290] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 02:05:12.936774 18783 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 02:05:12.939038 18783 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 LEADER]: Becoming Leader. State: Replica: 7d111deb3d0a4bab93b13f193855cef5, State: Running, Role: LEADER
I20250811 02:05:12.939950 18783 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER }
I20250811 02:05:12.947206 18784 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7d111deb3d0a4bab93b13f193855cef5. Latest consensus state: current_term: 1 leader_uuid: "7d111deb3d0a4bab93b13f193855cef5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER } }
I20250811 02:05:12.947755 18784 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: This master's current role is: LEADER
I20250811 02:05:12.948634 18785 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7d111deb3d0a4bab93b13f193855cef5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER } }
I20250811 02:05:12.949193 18785 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: This master's current role is: LEADER
I20250811 02:05:12.960772 18731 tablet_replica.cc:331] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: stopping tablet replica
I20250811 02:05:12.961323 18731 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 LEADER]: Raft consensus shutting down.
I20250811 02:05:12.961694 18731 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250811 02:05:12.963701 18731 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250811 02:05:12.964100 18731 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250811 02:05:13.019347 18731 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250811 02:05:14.049467 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18260
W20250811 02:05:14.082916 18413 connection.cc:537] client connection to 127.12.45.1:40133 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250811 02:05:14.083379 18413 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250811 02:05:14.083451 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18393
I20250811 02:05:14.127424 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18526
I20250811 02:05:14.163864 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:41513
--webserver_interface=127.12.45.62
--webserver_port=33011
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.12.45.62:41513 with env {}
W20250811 02:05:14.468735 18792 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:14.469300 18792 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:14.469772 18792 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:14.501097 18792 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 02:05:14.501391 18792 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:14.501614 18792 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 02:05:14.501825 18792 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 02:05:14.537184 18792 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.12.45.62:41513
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.12.45.62:41513
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.12.45.62
--webserver_port=33011
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:14.538637 18792 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:14.540284 18792 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:14.552134 18798 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:15.955770 18797 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 18792
W20250811 02:05:16.352840 18792 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.801s user 0.602s sys 1.198s
W20250811 02:05:14.552716 18799 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:16.353330 18792 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.801s user 0.603s sys 1.198s
W20250811 02:05:16.355223 18801 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:16.358705 18800 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1802 milliseconds
I20250811 02:05:16.358777 18792 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:16.360066 18792 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:16.362532 18792 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:16.363863 18792 hybrid_clock.cc:648] HybridClock initialized: now 1754877916363820 us; error 49 us; skew 500 ppm
I20250811 02:05:16.364691 18792 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:16.370698 18792 webserver.cc:489] Webserver started at http://127.12.45.62:33011/ using document root <none> and password file <none>
I20250811 02:05:16.371641 18792 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:16.371840 18792 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:16.379693 18792 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.003s
I20250811 02:05:16.384303 18808 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:16.385354 18792 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 02:05:16.385685 18792 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
uuid: "7d111deb3d0a4bab93b13f193855cef5"
format_stamp: "Formatted at 2025-08-11 02:05:12 on dist-test-slave-xn5f"
I20250811 02:05:16.387744 18792 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:16.439667 18792 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:16.441152 18792 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:16.441577 18792 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:16.513875 18792 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.62:41513
I20250811 02:05:16.513945 18859 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.62:41513 every 8 connection(s)
I20250811 02:05:16.516857 18792 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb
I20250811 02:05:16.518208 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18792
I20250811 02:05:16.519814 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.1:40133
--local_ip_for_outbound_sockets=127.12.45.1
--tserver_master_addrs=127.12.45.62:41513
--webserver_port=38277
--webserver_interface=127.12.45.1
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 02:05:16.528304 18860 sys_catalog.cc:263] Verifying existing consensus state
I20250811 02:05:16.540510 18860 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Bootstrap starting.
I20250811 02:05:16.551285 18860 log.cc:826] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:16.563661 18860 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:05:16.564504 18860 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Bootstrap complete.
I20250811 02:05:16.585453 18860 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:05:16.586230 18860 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7d111deb3d0a4bab93b13f193855cef5, State: Initialized, Role: FOLLOWER
I20250811 02:05:16.587036 18860 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:05:16.587553 18860 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 02:05:16.587815 18860 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 02:05:16.588150 18860 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:05:16.592444 18860 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:05:16.593127 18860 leader_election.cc:304] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7d111deb3d0a4bab93b13f193855cef5; no voters:
I20250811 02:05:16.595389 18860 leader_election.cc:290] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 02:05:16.595974 18864 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:05:16.599210 18864 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [term 2 LEADER]: Becoming Leader. State: Replica: 7d111deb3d0a4bab93b13f193855cef5, State: Running, Role: LEADER
I20250811 02:05:16.600170 18864 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } }
I20250811 02:05:16.601049 18860 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 02:05:16.613442 18865 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "7d111deb3d0a4bab93b13f193855cef5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } } }
I20250811 02:05:16.615666 18865 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: This master's current role is: LEADER
I20250811 02:05:16.614797 18866 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7d111deb3d0a4bab93b13f193855cef5. Latest consensus state: current_term: 2 leader_uuid: "7d111deb3d0a4bab93b13f193855cef5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7d111deb3d0a4bab93b13f193855cef5" member_type: VOTER last_known_addr { host: "127.12.45.62" port: 41513 } } }
I20250811 02:05:16.617704 18866 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5 [sys.catalog]: This master's current role is: LEADER
I20250811 02:05:16.618630 18874 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 02:05:16.631892 18874 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=8bba2c5bbbe9434197037640208cb07d]
I20250811 02:05:16.639539 18874 tablet_loader.cc:96] loaded metadata for tablet 99f93a0890d7435c9e6d36afcd715e57 (table pre_rebuild [id=8bba2c5bbbe9434197037640208cb07d])
I20250811 02:05:16.641268 18874 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 02:05:16.672191 18874 catalog_manager.cc:1349] Generated new cluster ID: f11f86523f784e79adfdb4157df9f4c7
I20250811 02:05:16.672457 18874 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 02:05:16.692976 18874 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 02:05:16.694587 18874 catalog_manager.cc:1506] Loading token signing keys...
I20250811 02:05:16.713617 18874 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Generated new TSK 0
I20250811 02:05:16.716053 18874 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 02:05:16.908038 18862 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:16.908563 18862 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:16.909139 18862 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:16.940313 18862 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:16.941172 18862 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.1
I20250811 02:05:16.976018 18862 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.1:40133
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.12.45.1
--webserver_port=38277
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.1
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:16.977552 18862 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:16.979226 18862 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:16.991921 18888 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:16.993233 18889 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:18.395506 18887 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 18862
W20250811 02:05:18.769697 18891 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:18.767220 18862 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.774s user 0.581s sys 1.074s
W20250811 02:05:18.770502 18862 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.777s user 0.581s sys 1.074s
W20250811 02:05:18.775489 18890 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1781 milliseconds
I20250811 02:05:18.775512 18862 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:18.776770 18862 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:18.778844 18862 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:18.780308 18862 hybrid_clock.cc:648] HybridClock initialized: now 1754877918780239 us; error 39 us; skew 500 ppm
I20250811 02:05:18.781466 18862 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:18.787622 18862 webserver.cc:489] Webserver started at http://127.12.45.1:38277/ using document root <none> and password file <none>
I20250811 02:05:18.788548 18862 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:18.788790 18862 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:18.796841 18862 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250811 02:05:18.801759 18899 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:18.802961 18862 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 02:05:18.803283 18862 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
uuid: "4ffc0978d8024920b9bfc456f8de19c4"
format_stamp: "Formatted at 2025-08-11 02:05:00 on dist-test-slave-xn5f"
I20250811 02:05:18.805212 18862 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:18.863420 18862 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:18.864882 18862 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:18.865320 18862 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:18.868443 18862 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:18.874434 18906 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:05:18.885426 18862 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:05:18.885646 18862 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.001s sys 0.001s
I20250811 02:05:18.885928 18862 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:05:18.890347 18862 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:05:18.890530 18862 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.002s sys 0.000s
I20250811 02:05:18.891049 18906 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Bootstrap starting.
I20250811 02:05:19.101089 18862 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.1:40133
I20250811 02:05:19.101207 19012 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.1:40133 every 8 connection(s)
I20250811 02:05:19.104211 18862 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb
I20250811 02:05:19.107693 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 18862
I20250811 02:05:19.110451 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.2:39469
--local_ip_for_outbound_sockets=127.12.45.2
--tserver_master_addrs=127.12.45.62:41513
--webserver_port=44933
--webserver_interface=127.12.45.2
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 02:05:19.162652 19013 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:19.163460 19013 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:19.165228 19013 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:19.170620 18825 ts_manager.cc:194] Registered new tserver with Master: 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133)
I20250811 02:05:19.181100 18825 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.1:48911
I20250811 02:05:19.192888 18906 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Log is configured to *not* fsync() on all Append() calls
W20250811 02:05:19.455870 19017 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:19.456413 19017 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:19.457049 19017 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:19.488380 19017 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:19.489199 19017 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.2
I20250811 02:05:19.535771 19017 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.2:39469
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.12.45.2
--webserver_port=44933
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.2
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:19.537521 19017 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:19.539654 19017 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:19.554364 19024 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:20.185771 19013 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
W20250811 02:05:20.956835 19023 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 19017
W20250811 02:05:21.051529 19023 kernel_stack_watchdog.cc:198] Thread 19017 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 397ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 02:05:19.557370 19025 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:21.052695 19017 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.495s user 0.424s sys 1.032s
W20250811 02:05:21.055075 19017 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.498s user 0.424s sys 1.033s
W20250811 02:05:21.055190 19027 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:21.059410 19026 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1501 milliseconds
I20250811 02:05:21.059453 19017 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 02:05:21.060864 19017 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:21.063485 19017 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:21.064965 19017 hybrid_clock.cc:648] HybridClock initialized: now 1754877921064928 us; error 31 us; skew 500 ppm
I20250811 02:05:21.066061 19017 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:21.074013 19017 webserver.cc:489] Webserver started at http://127.12.45.2:44933/ using document root <none> and password file <none>
I20250811 02:05:21.075330 19017 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:21.075636 19017 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:21.086318 19017 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.005s sys 0.001s
I20250811 02:05:21.092156 19034 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:21.093423 19017 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 02:05:21.093806 19017 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
uuid: "72051acd86da47b79688091dcfdec9e1"
format_stamp: "Formatted at 2025-08-11 02:05:02 on dist-test-slave-xn5f"
I20250811 02:05:21.096570 19017 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:21.152976 19017 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:21.154469 19017 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:21.154919 19017 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:21.158105 19017 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:21.164863 19041 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:05:21.172585 19017 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:05:21.172888 19017 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.003s sys 0.000s
I20250811 02:05:21.173282 19017 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:05:21.180467 19017 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:05:21.180745 19017 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.007s sys 0.000s
I20250811 02:05:21.181842 19041 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Bootstrap starting.
I20250811 02:05:21.357210 19017 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.2:39469
I20250811 02:05:21.357318 19147 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.2:39469 every 8 connection(s)
I20250811 02:05:21.360136 19017 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb
I20250811 02:05:21.363580 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 19017
I20250811 02:05:21.365500 12468 external_mini_cluster.cc:1366] Running /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
/tmp/dist-test-task4YJXFh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.12.45.3:43595
--local_ip_for_outbound_sockets=127.12.45.3
--tserver_master_addrs=127.12.45.62:41513
--webserver_port=37237
--webserver_interface=127.12.45.3
--builtin_ntp_servers=127.12.45.20:43419
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 02:05:21.394286 19148 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:21.394793 19148 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:21.396052 19148 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:21.400117 18825 ts_manager.cc:194] Registered new tserver with Master: 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469)
I20250811 02:05:21.403446 18825 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.2:58713
I20250811 02:05:21.527740 19041 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Log is configured to *not* fsync() on all Append() calls
W20250811 02:05:21.727600 19152 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 02:05:21.728122 19152 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 02:05:21.728621 19152 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 02:05:21.759420 19152 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 02:05:21.760249 19152 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.12.45.3
I20250811 02:05:21.795650 19152 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.12.45.20:43419
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.12.45.3:43595
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.12.45.3
--webserver_port=37237
--tserver_master_addrs=127.12.45.62:41513
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.12.45.3
--log_dir=/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 11 Aug 2025 01:59:10 UTC on 5fd53c4cbb9d
build id 7509
TSAN enabled
I20250811 02:05:21.797026 19152 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 02:05:21.798627 19152 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 02:05:21.813050 19160 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:22.407229 19148 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
I20250811 02:05:22.737381 18906 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:05:22.739228 18906 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Bootstrap complete.
I20250811 02:05:22.742072 18906 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent bootstrapping tablet: real 3.851s user 3.531s sys 0.104s
I20250811 02:05:22.768019 18906 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:22.772814 18906 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ffc0978d8024920b9bfc456f8de19c4, State: Initialized, Role: FOLLOWER
I20250811 02:05:22.774458 18906 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:22.784988 18906 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent starting tablet: real 0.042s user 0.039s sys 0.000s
W20250811 02:05:23.214113 19158 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 19152
W20250811 02:05:23.284726 19152 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.470s user 0.482s sys 0.876s
W20250811 02:05:21.823523 19159 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 02:05:23.285671 19152 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.471s user 0.482s sys 0.876s
I20250811 02:05:23.286037 19152 server_base.cc:1047] running on GCE node
W20250811 02:05:23.287117 19162 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 02:05:23.288295 19152 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 02:05:23.290398 19152 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 02:05:23.291747 19152 hybrid_clock.cc:648] HybridClock initialized: now 1754877923291709 us; error 30 us; skew 500 ppm
I20250811 02:05:23.292901 19152 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 02:05:23.299090 19152 webserver.cc:489] Webserver started at http://127.12.45.3:37237/ using document root <none> and password file <none>
I20250811 02:05:23.300067 19152 fs_manager.cc:362] Metadata directory not provided
I20250811 02:05:23.300297 19152 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 02:05:23.308459 19152 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.001s
I20250811 02:05:23.313232 19170 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 02:05:23.314278 19152 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 02:05:23.314559 19152 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data,/tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
format_stamp: "Formatted at 2025-08-11 02:05:03 on dist-test-slave-xn5f"
I20250811 02:05:23.316520 19152 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 02:05:23.373827 19152 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 02:05:23.375828 19152 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 02:05:23.376395 19152 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 02:05:23.379565 19152 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 02:05:23.385567 19177 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 02:05:23.396486 19152 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 02:05:23.396697 19152 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.001s sys 0.001s
I20250811 02:05:23.396986 19152 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 02:05:23.401417 19152 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 02:05:23.401608 19152 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250811 02:05:23.402081 19177 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Bootstrap starting.
I20250811 02:05:23.617089 19152 rpc_server.cc:307] RPC server started. Bound to: 127.12.45.3:43595
I20250811 02:05:23.617455 19283 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.12.45.3:43595 every 8 connection(s)
I20250811 02:05:23.620771 19152 server_base.cc:1179] Dumped server information to /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb
I20250811 02:05:23.624058 12468 external_mini_cluster.cc:1428] Started /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu as pid 19152
I20250811 02:05:23.665717 19284 heartbeater.cc:344] Connected to a master server at 127.12.45.62:41513
I20250811 02:05:23.666183 19284 heartbeater.cc:461] Registering TS with master...
I20250811 02:05:23.667475 19284 heartbeater.cc:507] Master 127.12.45.62:41513 requested a full tablet report, sending...
I20250811 02:05:23.671450 18824 ts_manager.cc:194] Registered new tserver with Master: f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:23.674542 18824 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.12.45.3:60385
I20250811 02:05:23.681120 12468 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 02:05:23.723407 19177 log.cc:826] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Log is configured to *not* fsync() on all Append() calls
I20250811 02:05:24.280700 19041 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:05:24.281502 19041 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Bootstrap complete.
I20250811 02:05:24.282866 19041 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Time spent bootstrapping tablet: real 3.102s user 3.007s sys 0.047s
I20250811 02:05:24.293725 19041 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:24.295737 19041 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 72051acd86da47b79688091dcfdec9e1, State: Initialized, Role: FOLLOWER
I20250811 02:05:24.296471 19041 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:24.299537 19041 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Time spent starting tablet: real 0.016s user 0.015s sys 0.001s
I20250811 02:05:24.541334 19298 raft_consensus.cc:491] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:05:24.541810 19298 raft_consensus.cc:513] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:24.544085 19298 leader_election.cc:290] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469), f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:24.570108 19103 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "4ffc0978d8024920b9bfc456f8de19c4" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "72051acd86da47b79688091dcfdec9e1" is_pre_election: true
I20250811 02:05:24.571087 19103 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 4ffc0978d8024920b9bfc456f8de19c4 in term 1.
I20250811 02:05:24.572710 18901 leader_election.cc:304] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4ffc0978d8024920b9bfc456f8de19c4, 72051acd86da47b79688091dcfdec9e1; no voters:
I20250811 02:05:24.573632 19298 raft_consensus.cc:2802] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 02:05:24.573990 19298 raft_consensus.cc:491] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:05:24.574321 19298 raft_consensus.cc:3058] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Advancing to term 2
I20250811 02:05:24.567180 19239 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "4ffc0978d8024920b9bfc456f8de19c4" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" is_pre_election: true
W20250811 02:05:24.577704 18901 leader_election.cc:343] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595): Illegal state: must be running to vote when last-logged opid is not known
I20250811 02:05:24.583578 19298 raft_consensus.cc:513] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:24.585443 19298 leader_election.cc:290] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 election: Requested vote from peers 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469), f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595)
I20250811 02:05:24.586141 19103 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "4ffc0978d8024920b9bfc456f8de19c4" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "72051acd86da47b79688091dcfdec9e1"
I20250811 02:05:24.586437 19239 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "99f93a0890d7435c9e6d36afcd715e57" candidate_uuid: "4ffc0978d8024920b9bfc456f8de19c4" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "f3b2fba7dba94bab909e0f263b9edf6b"
I20250811 02:05:24.586623 19103 raft_consensus.cc:3058] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Advancing to term 2
W20250811 02:05:24.587479 18901 leader_election.cc:343] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595): Illegal state: must be running to vote when last-logged opid is not known
I20250811 02:05:24.592849 19103 raft_consensus.cc:2466] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 4ffc0978d8024920b9bfc456f8de19c4 in term 2.
I20250811 02:05:24.593676 18901 leader_election.cc:304] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 4ffc0978d8024920b9bfc456f8de19c4, 72051acd86da47b79688091dcfdec9e1; no voters: f3b2fba7dba94bab909e0f263b9edf6b
I20250811 02:05:24.594385 19298 raft_consensus.cc:2802] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 02:05:24.595826 19298 raft_consensus.cc:695] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 LEADER]: Becoming Leader. State: Replica: 4ffc0978d8024920b9bfc456f8de19c4, State: Running, Role: LEADER
I20250811 02:05:24.596557 19298 consensus_queue.cc:237] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:24.608564 18824 catalog_manager.cc:5582] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 reported cstate change: term changed from 0 to 2, leader changed from <none> to 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1), VOTER 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1) added, VOTER 72051acd86da47b79688091dcfdec9e1 (127.12.45.2) added, VOTER f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3) added. New cstate: current_term: 2 leader_uuid: "4ffc0978d8024920b9bfc456f8de19c4" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } health_report { overall_health: UNKNOWN } } }
I20250811 02:05:24.678640 19284 heartbeater.cc:499] Master 127.12.45.62:41513 was elected leader, sending a full tablet report...
W20250811 02:05:25.091099 12468 scanner-internal.cc:458] Time spent opening tablet: real 1.371s user 0.005s sys 0.002s
W20250811 02:05:25.148448 18901 consensus_peers.cc:489] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 -> Peer f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595): Couldn't send request to peer f3b2fba7dba94bab909e0f263b9edf6b. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
I20250811 02:05:25.212337 19103 raft_consensus.cc:1273] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Refusing update from remote peer 4ffc0978d8024920b9bfc456f8de19c4: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250811 02:05:25.215250 19313 consensus_queue.cc:1035] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [LEADER]: Connected to new peer: Peer: permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
I20250811 02:05:25.318604 18968 consensus_queue.cc:237] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:25.323889 19102 raft_consensus.cc:1273] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Refusing update from remote peer 4ffc0978d8024920b9bfc456f8de19c4: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250811 02:05:25.325348 19314 consensus_queue.cc:1035] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [LEADER]: Connected to new peer: Peer: permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.001s
I20250811 02:05:25.331751 19314 raft_consensus.cc:2953] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } }
I20250811 02:05:25.335896 19102 raft_consensus.cc:2953] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } }
I20250811 02:05:25.350673 18812 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 99f93a0890d7435c9e6d36afcd715e57 with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 02:05:25.356451 18824 catalog_manager.cc:5582] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 reported cstate change: config changed from index -1 to 207, VOTER f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3) evicted. New cstate: current_term: 2 leader_uuid: "4ffc0978d8024920b9bfc456f8de19c4" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } health_report { overall_health: HEALTHY } } }
I20250811 02:05:25.413763 18968 consensus_queue.cc:237] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:25.416253 19314 raft_consensus.cc:2953] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER 72051acd86da47b79688091dcfdec9e1 (127.12.45.2) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } }
I20250811 02:05:25.424445 18812 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 99f93a0890d7435c9e6d36afcd715e57 with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 02:05:25.428333 18824 catalog_manager.cc:5582] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 reported cstate change: config changed from index 207 to 208, VOTER 72051acd86da47b79688091dcfdec9e1 (127.12.45.2) evicted. New cstate: current_term: 2 leader_uuid: "4ffc0978d8024920b9bfc456f8de19c4" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } health_report { overall_health: HEALTHY } } }
I20250811 02:05:25.457396 19219 tablet_service.cc:1515] Processing DeleteTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 with delete_type TABLET_DATA_TOMBSTONED (TS f3b2fba7dba94bab909e0f263b9edf6b not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:46890
W20250811 02:05:25.462348 18810 catalog_manager.cc:4908] TS f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595): delete failed for tablet 99f93a0890d7435c9e6d36afcd715e57 because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 99f93a0890d7435c9e6d36afcd715e57 already in progress: opening tablet
I20250811 02:05:25.467845 19083 tablet_service.cc:1515] Processing DeleteTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 with delete_type TABLET_DATA_TOMBSTONED (TS 72051acd86da47b79688091dcfdec9e1 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:35234
I20250811 02:05:25.476084 19328 tablet_replica.cc:331] stopping tablet replica
I20250811 02:05:25.481283 19328 raft_consensus.cc:2241] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250811 02:05:25.482149 19328 raft_consensus.cc:2270] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 02:05:25.541651 19328 ts_tablet_manager.cc:1905] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 02:05:25.557024 19328 ts_tablet_manager.cc:1918] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250811 02:05:25.557452 19328 log.cc:1199] T 99f93a0890d7435c9e6d36afcd715e57 P 72051acd86da47b79688091dcfdec9e1: Deleting WAL directory at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/wal/wals/99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:25.559300 18810 catalog_manager.cc:4928] TS 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469): tablet 99f93a0890d7435c9e6d36afcd715e57 (table pre_rebuild [id=8bba2c5bbbe9434197037640208cb07d]) successfully deleted
I20250811 02:05:25.962119 19219 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:05:25.966486 18948 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:05:25.995815 19083 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:05:26.168460 19177 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 02:05:26.179857 19177 tablet_bootstrap.cc:492] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Bootstrap complete.
I20250811 02:05:26.181739 19177 ts_tablet_manager.cc:1397] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent bootstrapping tablet: real 2.780s user 2.608s sys 0.056s
I20250811 02:05:26.190325 19177 raft_consensus.cc:357] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:26.203136 19177 raft_consensus.cc:738] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3b2fba7dba94bab909e0f263b9edf6b, State: Initialized, Role: FOLLOWER
I20250811 02:05:26.204174 19177 consensus_queue.cc:260] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } }
I20250811 02:05:26.217056 19177 ts_tablet_manager.cc:1428] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent starting tablet: real 0.035s user 0.014s sys 0.004s
I20250811 02:05:26.222366 19219 tablet_service.cc:1515] Processing DeleteTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 with delete_type TABLET_DATA_TOMBSTONED (Replica has no consensus available (current committed config index is 208)) from {username='slave'} at 127.0.0.1:46890
I20250811 02:05:26.240424 19355 tablet_replica.cc:331] stopping tablet replica
I20250811 02:05:26.241339 19355 raft_consensus.cc:2241] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Raft consensus shutting down.
I20250811 02:05:26.241950 19355 raft_consensus.cc:2270] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Raft consensus is shut down!
I20250811 02:05:26.273507 19355 ts_tablet_manager.cc:1905] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 02:05:26.291684 19355 ts_tablet_manager.cc:1918] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250811 02:05:26.292166 19355 log.cc:1199] T 99f93a0890d7435c9e6d36afcd715e57 P f3b2fba7dba94bab909e0f263b9edf6b: Deleting WAL directory at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/wal/wals/99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:26.294051 18810 catalog_manager.cc:4928] TS f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3:43595): tablet 99f93a0890d7435c9e6d36afcd715e57 (table pre_rebuild [id=8bba2c5bbbe9434197037640208cb07d]) successfully deleted
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
7d111deb3d0a4bab93b13f193855cef5 | 127.12.45.62:41513 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:43419 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
4ffc0978d8024920b9bfc456f8de19c4 | 127.12.45.1:40133 | HEALTHY | <none> | 1 | 0
72051acd86da47b79688091dcfdec9e1 | 127.12.45.2:39469 | HEALTHY | <none> | 0 | 0
f3b2fba7dba94bab909e0f263b9edf6b | 127.12.45.3:43595 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.12.45.1 | experimental | 127.12.45.1:40133
local_ip_for_outbound_sockets | 127.12.45.2 | experimental | 127.12.45.2:39469
local_ip_for_outbound_sockets | 127.12.45.3 | experimental | 127.12.45.3:43595
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb | hidden | 127.12.45.1:40133
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb | hidden | 127.12.45.2:39469
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb | hidden | 127.12.45.3:43595
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:43419 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250811 02:05:26.364106 12468 log_verifier.cc:126] Checking tablet 99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:26.615607 12468 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:26.618090 18825 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:43536:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250811 02:05:26.618582 18825 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:43536:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250811 02:05:26.630651 18825 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 7d111deb3d0a4bab93b13f193855cef5: Sending DeleteTablet for 1 replicas of tablet 99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:26.632637 12468 test_util.cc:276] Using random seed: 1534127261
I20250811 02:05:26.632553 18948 tablet_service.cc:1515] Processing DeleteTablet for tablet 99f93a0890d7435c9e6d36afcd715e57 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-08-11 02:05:26 UTC) from {username='slave'} at 127.0.0.1:51418
I20250811 02:05:26.643784 19360 tablet_replica.cc:331] stopping tablet replica
I20250811 02:05:26.646026 19360 raft_consensus.cc:2241] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 LEADER]: Raft consensus shutting down.
I20250811 02:05:26.646864 19360 raft_consensus.cc:2270] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 02:05:26.686051 19360 ts_tablet_manager.cc:1905] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250811 02:05:26.694329 18825 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:49956:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250811 02:05:26.697968 18825 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 02:05:26.700968 19360 ts_tablet_manager.cc:1918] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250811 02:05:26.701370 19360 log.cc:1199] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Deleting WAL directory at /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/wal/wals/99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:26.702652 19360 ts_tablet_manager.cc:1939] T 99f93a0890d7435c9e6d36afcd715e57 P 4ffc0978d8024920b9bfc456f8de19c4: Deleting consensus metadata
I20250811 02:05:26.705349 18812 catalog_manager.cc:4928] TS 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133): tablet 99f93a0890d7435c9e6d36afcd715e57 (table pre_rebuild [id=8bba2c5bbbe9434197037640208cb07d]) successfully deleted
I20250811 02:05:26.726605 19083 tablet_service.cc:1468] Processing CreateTablet for tablet 46fcaf48ed174313988fb32f85e1f6d5 (DEFAULT_TABLE table=post_rebuild [id=72541b6f45554145b4e54baa2a69a42c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:26.727970 19083 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 46fcaf48ed174313988fb32f85e1f6d5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:26.727679 18948 tablet_service.cc:1468] Processing CreateTablet for tablet 46fcaf48ed174313988fb32f85e1f6d5 (DEFAULT_TABLE table=post_rebuild [id=72541b6f45554145b4e54baa2a69a42c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:26.729001 18948 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 46fcaf48ed174313988fb32f85e1f6d5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:26.729655 19219 tablet_service.cc:1468] Processing CreateTablet for tablet 46fcaf48ed174313988fb32f85e1f6d5 (DEFAULT_TABLE table=post_rebuild [id=72541b6f45554145b4e54baa2a69a42c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 02:05:26.730901 19219 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 46fcaf48ed174313988fb32f85e1f6d5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 02:05:26.748940 19368 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: Bootstrap starting.
I20250811 02:05:26.760757 19368 tablet_bootstrap.cc:654] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:26.762478 19369 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: Bootstrap starting.
I20250811 02:05:26.764128 19367 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: Bootstrap starting.
I20250811 02:05:26.768801 19367 tablet_bootstrap.cc:654] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:26.772219 19369 tablet_bootstrap.cc:654] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: Neither blocks nor log segments found. Creating new log.
I20250811 02:05:26.781180 19367 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: No bootstrap required, opened a new log
I20250811 02:05:26.781679 19367 ts_tablet_manager.cc:1397] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent bootstrapping tablet: real 0.018s user 0.005s sys 0.011s
I20250811 02:05:26.782334 19368 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: No bootstrap required, opened a new log
I20250811 02:05:26.782332 19369 tablet_bootstrap.cc:492] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: No bootstrap required, opened a new log
I20250811 02:05:26.782773 19368 ts_tablet_manager.cc:1397] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: Time spent bootstrapping tablet: real 0.034s user 0.016s sys 0.004s
I20250811 02:05:26.782904 19369 ts_tablet_manager.cc:1397] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent bootstrapping tablet: real 0.021s user 0.014s sys 0.000s
I20250811 02:05:26.784673 19367 raft_consensus.cc:357] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.785410 19367 raft_consensus.cc:383] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:26.785753 19367 raft_consensus.cc:738] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ffc0978d8024920b9bfc456f8de19c4, State: Initialized, Role: FOLLOWER
I20250811 02:05:26.785528 19368 raft_consensus.cc:357] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.785574 19369 raft_consensus.cc:357] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.786230 19368 raft_consensus.cc:383] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:26.786314 19369 raft_consensus.cc:383] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 02:05:26.786551 19368 raft_consensus.cc:738] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 72051acd86da47b79688091dcfdec9e1, State: Initialized, Role: FOLLOWER
I20250811 02:05:26.786664 19369 raft_consensus.cc:738] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3b2fba7dba94bab909e0f263b9edf6b, State: Initialized, Role: FOLLOWER
I20250811 02:05:26.786759 19367 consensus_queue.cc:260] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.787303 19368 consensus_queue.cc:260] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.787473 19369 consensus_queue.cc:260] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.797029 19367 ts_tablet_manager.cc:1428] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: Time spent starting tablet: real 0.015s user 0.008s sys 0.003s
I20250811 02:05:26.800628 19368 ts_tablet_manager.cc:1428] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: Time spent starting tablet: real 0.018s user 0.005s sys 0.011s
I20250811 02:05:26.804737 19369 ts_tablet_manager.cc:1428] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: Time spent starting tablet: real 0.021s user 0.000s sys 0.015s
I20250811 02:05:26.824698 19375 raft_consensus.cc:491] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 02:05:26.825157 19375 raft_consensus.cc:513] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.827549 19375 leader_election.cc:290] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469), 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133)
I20250811 02:05:26.858592 19102 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "46fcaf48ed174313988fb32f85e1f6d5" candidate_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "72051acd86da47b79688091dcfdec9e1" is_pre_election: true
I20250811 02:05:26.859261 19102 raft_consensus.cc:2466] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3b2fba7dba94bab909e0f263b9edf6b in term 0.
I20250811 02:05:26.859473 18968 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "46fcaf48ed174313988fb32f85e1f6d5" candidate_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ffc0978d8024920b9bfc456f8de19c4" is_pre_election: true
I20250811 02:05:26.860141 18968 raft_consensus.cc:2466] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3b2fba7dba94bab909e0f263b9edf6b in term 0.
I20250811 02:05:26.860476 19172 leader_election.cc:304] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 72051acd86da47b79688091dcfdec9e1, f3b2fba7dba94bab909e0f263b9edf6b; no voters:
I20250811 02:05:26.861340 19375 raft_consensus.cc:2802] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 02:05:26.861689 19375 raft_consensus.cc:491] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 02:05:26.862032 19375 raft_consensus.cc:3058] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:26.868664 19375 raft_consensus.cc:513] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
I20250811 02:05:26.870537 19375 leader_election.cc:290] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [CANDIDATE]: Term 1 election: Requested vote from peers 72051acd86da47b79688091dcfdec9e1 (127.12.45.2:39469), 4ffc0978d8024920b9bfc456f8de19c4 (127.12.45.1:40133)
I20250811 02:05:26.871366 19102 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "46fcaf48ed174313988fb32f85e1f6d5" candidate_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "72051acd86da47b79688091dcfdec9e1"
I20250811 02:05:26.871802 19102 raft_consensus.cc:3058] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:26.871680 18968 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "46fcaf48ed174313988fb32f85e1f6d5" candidate_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ffc0978d8024920b9bfc456f8de19c4"
I20250811 02:05:26.872220 18968 raft_consensus.cc:3058] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 0 FOLLOWER]: Advancing to term 1
I20250811 02:05:26.875954 19102 raft_consensus.cc:2466] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3b2fba7dba94bab909e0f263b9edf6b in term 1.
I20250811 02:05:26.876746 19172 leader_election.cc:304] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 72051acd86da47b79688091dcfdec9e1, f3b2fba7dba94bab909e0f263b9edf6b; no voters:
I20250811 02:05:26.877317 19375 raft_consensus.cc:2802] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 FOLLOWER]: Leader election won for term 1
W20250811 02:05:26.878506 19149 tablet.cc:2378] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:05:26.878818 18968 raft_consensus.cc:2466] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3b2fba7dba94bab909e0f263b9edf6b in term 1.
I20250811 02:05:26.882012 19375 raft_consensus.cc:695] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [term 1 LEADER]: Becoming Leader. State: Replica: f3b2fba7dba94bab909e0f263b9edf6b, State: Running, Role: LEADER
I20250811 02:05:26.882711 19375 consensus_queue.cc:237] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } }
W20250811 02:05:26.889904 19285 tablet.cc:2378] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 02:05:26.895686 19014 tablet.cc:2378] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 02:05:26.895407 18824 catalog_manager.cc:5582] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b reported cstate change: term changed from 0 to 1, leader changed from <none> to f3b2fba7dba94bab909e0f263b9edf6b (127.12.45.3). New cstate: current_term: 1 leader_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f3b2fba7dba94bab909e0f263b9edf6b" member_type: VOTER last_known_addr { host: "127.12.45.3" port: 43595 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 } health_report { overall_health: UNKNOWN } } }
I20250811 02:05:27.001699 19102 raft_consensus.cc:1273] T 46fcaf48ed174313988fb32f85e1f6d5 P 72051acd86da47b79688091dcfdec9e1 [term 1 FOLLOWER]: Refusing update from remote peer f3b2fba7dba94bab909e0f263b9edf6b: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 02:05:27.003038 18968 raft_consensus.cc:1273] T 46fcaf48ed174313988fb32f85e1f6d5 P 4ffc0978d8024920b9bfc456f8de19c4 [term 1 FOLLOWER]: Refusing update from remote peer f3b2fba7dba94bab909e0f263b9edf6b: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 02:05:27.003564 19375 consensus_queue.cc:1035] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [LEADER]: Connected to new peer: Peer: permanent_uuid: "72051acd86da47b79688091dcfdec9e1" member_type: VOTER last_known_addr { host: "127.12.45.2" port: 39469 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:05:27.005007 19375 consensus_queue.cc:1035] T 46fcaf48ed174313988fb32f85e1f6d5 P f3b2fba7dba94bab909e0f263b9edf6b [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ffc0978d8024920b9bfc456f8de19c4" member_type: VOTER last_known_addr { host: "127.12.45.1" port: 40133 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 02:05:27.039577 19390 mvcc.cc:204] Tried to move back new op lower bound from 7187979988982181888 to 7187979988518506496. Current Snapshot: MvccSnapshot[applied={T|T < 7187979988982181888}]
W20250811 02:05:31.244323 18856 debug-util.cc:398] Leaking SignalData structure 0x7b080008c120 after lost signal to thread 18793
W20250811 02:05:31.244994 18856 debug-util.cc:398] Leaking SignalData structure 0x7b08000a9780 after lost signal to thread 18859
I20250811 02:05:32.117995 18948 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 02:05:32.142632 19219 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 02:05:32.175266 19083 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
7d111deb3d0a4bab93b13f193855cef5 | 127.12.45.62:41513 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:43419 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
4ffc0978d8024920b9bfc456f8de19c4 | 127.12.45.1:40133 | HEALTHY | <none> | 0 | 0
72051acd86da47b79688091dcfdec9e1 | 127.12.45.2:39469 | HEALTHY | <none> | 0 | 0
f3b2fba7dba94bab909e0f263b9edf6b | 127.12.45.3:43595 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.12.45.1 | experimental | 127.12.45.1:40133
local_ip_for_outbound_sockets | 127.12.45.2 | experimental | 127.12.45.2:39469
local_ip_for_outbound_sockets | 127.12.45.3 | experimental | 127.12.45.3:43595
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-0/data/info.pb | hidden | 127.12.45.1:40133
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-1/data/info.pb | hidden | 127.12.45.2:39469
server_dump_info_path | /tmp/dist-test-task4YJXFh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754877736375003-12468-0/minicluster-data/ts-2/data/info.pb | hidden | 127.12.45.3:43595
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.12.45.20:43419 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250811 02:05:32.484082 12468 log_verifier.cc:126] Checking tablet 46fcaf48ed174313988fb32f85e1f6d5
I20250811 02:05:33.267683 12468 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 46fcaf48ed174313988fb32f85e1f6d5
I20250811 02:05:33.268576 12468 log_verifier.cc:126] Checking tablet 99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:33.268865 12468 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 99f93a0890d7435c9e6d36afcd715e57
I20250811 02:05:33.294756 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18862
I20250811 02:05:33.358052 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 19017
I20250811 02:05:33.397380 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 19152
I20250811 02:05:33.439095 12468 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task4YJXFh/build/tsan/bin/kudu with pid 18792
2025-08-11T02:05:33Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (37231 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (37231 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (197037 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST
I20250811 02:05:33.503609 12468 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/meta_cache.cc:302: suppressed but not reported on 2 messages since previous log ~51 seconds ago