Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250623 14:05:45.125505 2360 test_util.cc:276] Using random seed: -1254266379
W20250623 14:05:46.350239 2360 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.180s user 0.434s sys 0.746s
W20250623 14:05:46.350605 2360 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.181s user 0.434s sys 0.746s
I20250623 14:05:46.352922 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:05:46.353102 2360 ts_itest-base.cc:116] --------------
I20250623 14:05:46.353266 2360 ts_itest-base.cc:117] 4 tablet servers
I20250623 14:05:46.353446 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:05:46.353610 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:05:46Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:05:46Z Disabled control of system clock
I20250623 14:05:46.389355 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:45007
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:45007 with env {}
W20250623 14:05:46.684697 2374 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:46.685312 2374 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:46.685811 2374 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:46.717444 2374 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:05:46.717723 2374 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:46.718016 2374 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:05:46.718240 2374 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:05:46.753522 2374 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:45007
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:45007
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:46.754830 2374 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:46.756421 2374 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:46.771018 2383 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:46.771075 2380 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:46.771068 2381 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:46.771843 2374 server_base.cc:1048] running on GCE node
I20250623 14:05:47.916141 2374 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:47.918598 2374 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:47.919945 2374 hybrid_clock.cc:648] HybridClock initialized: now 1750687547919924 us; error 38 us; skew 500 ppm
I20250623 14:05:47.920733 2374 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:47.928634 2374 webserver.cc:469] Webserver started at http://127.2.78.62:37843/ using document root <none> and password file <none>
I20250623 14:05:47.931318 2374 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:47.931656 2374 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:47.932343 2374 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:05:47.939805 2374 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "824bbcb72f3a4646978d08b8ffb24673"
format_stamp: "Formatted at 2025-06-23 14:05:47 on dist-test-slave-stbh"
I20250623 14:05:47.941300 2374 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "824bbcb72f3a4646978d08b8ffb24673"
format_stamp: "Formatted at 2025-06-23 14:05:47 on dist-test-slave-stbh"
I20250623 14:05:47.948853 2374 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.001s sys 0.004s
I20250623 14:05:47.956779 2390 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:47.958031 2374 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.005s sys 0.001s
I20250623 14:05:47.958452 2374 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "824bbcb72f3a4646978d08b8ffb24673"
format_stamp: "Formatted at 2025-06-23 14:05:47 on dist-test-slave-stbh"
I20250623 14:05:47.958884 2374 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:48.022156 2374 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:48.024766 2374 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:48.025550 2374 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:48.100109 2374 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:45007
I20250623 14:05:48.100186 2441 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:45007 every 8 connection(s)
I20250623 14:05:48.102756 2374 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:05:48.107153 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 2374
I20250623 14:05:48.107686 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:05:48.108212 2442 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:05:48.127259 2442 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Bootstrap starting.
I20250623 14:05:48.132908 2442 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Neither blocks nor log segments found. Creating new log.
I20250623 14:05:48.135056 2442 log.cc:826] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Log is configured to *not* fsync() on all Append() calls
I20250623 14:05:48.140748 2442 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: No bootstrap required, opened a new log
I20250623 14:05:48.159099 2442 raft_consensus.cc:357] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:48.159714 2442 raft_consensus.cc:383] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:05:48.160001 2442 raft_consensus.cc:738] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 824bbcb72f3a4646978d08b8ffb24673, State: Initialized, Role: FOLLOWER
I20250623 14:05:48.160745 2442 consensus_queue.cc:260] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:48.161268 2442 raft_consensus.cc:397] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:05:48.161521 2442 raft_consensus.cc:491] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:05:48.161820 2442 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:05:48.165864 2442 raft_consensus.cc:513] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:48.166486 2442 leader_election.cc:304] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 824bbcb72f3a4646978d08b8ffb24673; no voters:
I20250623 14:05:48.167936 2442 leader_election.cc:290] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:05:48.168562 2447 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:05:48.170630 2447 raft_consensus.cc:695] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 LEADER]: Becoming Leader. State: Replica: 824bbcb72f3a4646978d08b8ffb24673, State: Running, Role: LEADER
I20250623 14:05:48.171227 2447 consensus_queue.cc:237] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:48.171648 2442 sys_catalog.cc:564] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:05:48.180435 2448 sys_catalog.cc:455] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "824bbcb72f3a4646978d08b8ffb24673" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } } }
I20250623 14:05:48.181082 2448 sys_catalog.cc:458] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: This master's current role is: LEADER
I20250623 14:05:48.180359 2449 sys_catalog.cc:455] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 824bbcb72f3a4646978d08b8ffb24673. Latest consensus state: current_term: 1 leader_uuid: "824bbcb72f3a4646978d08b8ffb24673" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } } }
I20250623 14:05:48.182019 2449 sys_catalog.cc:458] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: This master's current role is: LEADER
I20250623 14:05:48.187139 2456 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:05:48.199010 2456 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:05:48.214416 2456 catalog_manager.cc:1349] Generated new cluster ID: 5b17e5f5716e4dfdb25d0c63c754d132
I20250623 14:05:48.214746 2456 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:05:48.246762 2456 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:05:48.248277 2456 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:05:48.259883 2456 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Generated new TSK 0
I20250623 14:05:48.260775 2456 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:05:48.274253 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250623 14:05:48.573017 2466 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:48.573554 2466 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:48.574086 2466 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:48.606320 2466 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:48.607194 2466 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:05:48.643388 2466 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:48.644827 2466 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:48.646517 2466 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:48.666095 2473 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:48.668241 2475 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:48.666306 2472 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:49.862936 2466 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250623 14:05:49.862891 2474 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:05:49.867486 2466 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:49.870189 2466 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:49.871631 2466 hybrid_clock.cc:648] HybridClock initialized: now 1750687549871573 us; error 75 us; skew 500 ppm
I20250623 14:05:49.872414 2466 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:49.913302 2466 webserver.cc:469] Webserver started at http://127.2.78.1:36851/ using document root <none> and password file <none>
I20250623 14:05:49.914270 2466 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:49.914475 2466 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:49.914973 2466 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:05:49.919374 2466 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "9263ef5a135540069e51faf8549ab82c"
format_stamp: "Formatted at 2025-06-23 14:05:49 on dist-test-slave-stbh"
I20250623 14:05:49.920446 2466 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "9263ef5a135540069e51faf8549ab82c"
format_stamp: "Formatted at 2025-06-23 14:05:49 on dist-test-slave-stbh"
I20250623 14:05:49.928014 2466 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.004s
I20250623 14:05:49.934011 2482 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:49.935148 2466 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250623 14:05:49.935444 2466 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "9263ef5a135540069e51faf8549ab82c"
format_stamp: "Formatted at 2025-06-23 14:05:49 on dist-test-slave-stbh"
I20250623 14:05:49.935746 2466 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:49.995134 2466 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:49.996855 2466 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:49.997277 2466 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:49.999737 2466 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:05:50.003728 2466 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:05:50.003916 2466 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:50.004153 2466 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:05:50.004333 2466 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:50.138042 2466 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:37065
I20250623 14:05:50.138154 2595 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:37065 every 8 connection(s)
I20250623 14:05:50.141047 2466 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:05:50.150344 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 2466
I20250623 14:05:50.150887 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:05:50.157145 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:05:50.164363 2596 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:50.164822 2596 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:50.165892 2596 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:50.168556 2407 ts_manager.cc:194] Registered new tserver with Master: 9263ef5a135540069e51faf8549ab82c (127.2.78.1:37065)
I20250623 14:05:50.171545 2407 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:56479
W20250623 14:05:50.453617 2600 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:50.454087 2600 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:50.454526 2600 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:50.485152 2600 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:50.485950 2600 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:05:50.520738 2600 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:50.522011 2600 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:50.523522 2600 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:50.539017 2607 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:51.175369 2596 heartbeater.cc:499] Master 127.2.78.62:45007 was elected leader, sending a full tablet report...
W20250623 14:05:50.539312 2609 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:50.540120 2606 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:51.676635 2608 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:05:51.676781 2600 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:05:51.680337 2600 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:51.682451 2600 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:51.683784 2600 hybrid_clock.cc:648] HybridClock initialized: now 1750687551683750 us; error 54 us; skew 500 ppm
I20250623 14:05:51.684537 2600 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:51.691075 2600 webserver.cc:469] Webserver started at http://127.2.78.2:46747/ using document root <none> and password file <none>
I20250623 14:05:51.691933 2600 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:51.692117 2600 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:51.692525 2600 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:05:51.696983 2600 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "c33c0cbf8df04575b3325f409bde473b"
format_stamp: "Formatted at 2025-06-23 14:05:51 on dist-test-slave-stbh"
I20250623 14:05:51.698127 2600 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "c33c0cbf8df04575b3325f409bde473b"
format_stamp: "Formatted at 2025-06-23 14:05:51 on dist-test-slave-stbh"
I20250623 14:05:51.705117 2600 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250623 14:05:51.711714 2616 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:51.712708 2600 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250623 14:05:51.713047 2600 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "c33c0cbf8df04575b3325f409bde473b"
format_stamp: "Formatted at 2025-06-23 14:05:51 on dist-test-slave-stbh"
I20250623 14:05:51.713380 2600 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:51.773363 2600 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:51.774888 2600 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:51.775328 2600 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:51.777783 2600 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:05:51.781774 2600 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:05:51.782003 2600 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250623 14:05:51.782239 2600 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:05:51.782398 2600 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:51.914299 2600 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:38737
I20250623 14:05:51.914399 2729 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:38737 every 8 connection(s)
I20250623 14:05:51.917059 2600 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:05:51.919114 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 2600
I20250623 14:05:51.919756 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:05:51.927405 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:05:51.938969 2730 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:51.939388 2730 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:51.940371 2730 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:51.942505 2407 ts_manager.cc:194] Registered new tserver with Master: c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737)
I20250623 14:05:51.943791 2407 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:59633
W20250623 14:05:52.226143 2734 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:52.226684 2734 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:52.227185 2734 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:52.259083 2734 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:52.260064 2734 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:05:52.295806 2734 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:52.297266 2734 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:52.298928 2734 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:52.314674 2741 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:52.946810 2730 heartbeater.cc:499] Master 127.2.78.62:45007 was elected leader, sending a full tablet report...
W20250623 14:05:52.315150 2743 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:52.316320 2734 server_base.cc:1048] running on GCE node
W20250623 14:05:52.317862 2740 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:53.480324 2734 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:53.482575 2734 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:53.483906 2734 hybrid_clock.cc:648] HybridClock initialized: now 1750687553483866 us; error 60 us; skew 500 ppm
I20250623 14:05:53.484745 2734 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:53.491014 2734 webserver.cc:469] Webserver started at http://127.2.78.3:37605/ using document root <none> and password file <none>
I20250623 14:05:53.491932 2734 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:53.492149 2734 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:53.492600 2734 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:05:53.496907 2734 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "988ce200035b4f73845840c70c47d7ad"
format_stamp: "Formatted at 2025-06-23 14:05:53 on dist-test-slave-stbh"
I20250623 14:05:53.498441 2734 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "988ce200035b4f73845840c70c47d7ad"
format_stamp: "Formatted at 2025-06-23 14:05:53 on dist-test-slave-stbh"
I20250623 14:05:53.505522 2734 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250623 14:05:53.511554 2750 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:53.512524 2734 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250623 14:05:53.512861 2734 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "988ce200035b4f73845840c70c47d7ad"
format_stamp: "Formatted at 2025-06-23 14:05:53 on dist-test-slave-stbh"
I20250623 14:05:53.513206 2734 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:53.564781 2734 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:53.566303 2734 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:53.566735 2734 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:53.569321 2734 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:05:53.573319 2734 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:05:53.573523 2734 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:53.573781 2734 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:05:53.573956 2734 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.001s
I20250623 14:05:53.710301 2734 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:41139
I20250623 14:05:53.710405 2863 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:41139 every 8 connection(s)
I20250623 14:05:53.712831 2734 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:05:53.722597 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 2734
I20250623 14:05:53.723083 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:05:53.730041 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.4:0
--local_ip_for_outbound_sockets=127.2.78.4
--webserver_interface=127.2.78.4
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:05:53.737601 2864 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:53.738164 2864 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:53.739508 2864 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:53.742273 2407 ts_manager.cc:194] Registered new tserver with Master: 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
I20250623 14:05:53.743494 2407 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:42349
W20250623 14:05:54.031476 2868 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:54.032045 2868 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:54.032574 2868 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:54.063772 2868 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:54.064663 2868 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.4
I20250623 14:05:54.108660 2868 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.2.78.4
--webserver_port=0
--tserver_master_addrs=127.2.78.62:45007
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.4
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:54.110344 2868 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:54.112685 2868 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:54.136056 2875 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:54.747262 2864 heartbeater.cc:499] Master 127.2.78.62:45007 was elected leader, sending a full tablet report...
W20250623 14:05:54.143514 2877 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:54.142855 2874 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:54.143612 2868 server_base.cc:1048] running on GCE node
I20250623 14:05:55.300213 2868 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:55.302528 2868 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:55.303922 2868 hybrid_clock.cc:648] HybridClock initialized: now 1750687555303886 us; error 64 us; skew 500 ppm
I20250623 14:05:55.304718 2868 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:55.314669 2868 webserver.cc:469] Webserver started at http://127.2.78.4:45727/ using document root <none> and password file <none>
I20250623 14:05:55.315641 2868 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:55.315860 2868 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:55.316310 2868 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:05:55.320725 2868 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "648df76281f84c7dac14b03abf0373f7"
format_stamp: "Formatted at 2025-06-23 14:05:55 on dist-test-slave-stbh"
I20250623 14:05:55.321969 2868 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "648df76281f84c7dac14b03abf0373f7"
format_stamp: "Formatted at 2025-06-23 14:05:55 on dist-test-slave-stbh"
I20250623 14:05:55.329190 2868 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250623 14:05:55.334853 2885 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:55.335820 2868 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250623 14:05:55.336140 2868 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "648df76281f84c7dac14b03abf0373f7"
format_stamp: "Formatted at 2025-06-23 14:05:55 on dist-test-slave-stbh"
I20250623 14:05:55.336496 2868 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:55.381323 2868 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:55.382828 2868 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:55.383267 2868 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:55.385682 2868 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:05:55.389731 2868 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:05:55.389978 2868 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250623 14:05:55.390216 2868 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:05:55.390379 2868 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:55.523232 2868 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.4:32867
I20250623 14:05:55.523315 2999 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.4:32867 every 8 connection(s)
I20250623 14:05:55.525822 2868 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250623 14:05:55.526774 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 2868
I20250623 14:05:55.527413 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250623 14:05:55.568966 3000 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:55.569494 3000 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:55.570564 3000 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:55.572822 2407 ts_manager.cc:194] Registered new tserver with Master: 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867)
I20250623 14:05:55.574640 2407 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.4:40621
I20250623 14:05:55.580258 2360 external_mini_cluster.cc:934] 4 TS(s) registered with all masters
I20250623 14:05:55.632071 2407 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53172:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250623 14:05:55.703060 2933 tablet_service.cc:1468] Processing CreateTablet for tablet fea86fd2877b486aa03754ce163122a1 (DEFAULT_TABLE table=TestTable [id=99aba3e700ee49ac92d0877718efff97]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:05:55.705845 2933 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fea86fd2877b486aa03754ce163122a1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:05:55.708165 2665 tablet_service.cc:1468] Processing CreateTablet for tablet fea86fd2877b486aa03754ce163122a1 (DEFAULT_TABLE table=TestTable [id=99aba3e700ee49ac92d0877718efff97]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:05:55.710068 2665 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fea86fd2877b486aa03754ce163122a1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:05:55.720089 2798 tablet_service.cc:1468] Processing CreateTablet for tablet fea86fd2877b486aa03754ce163122a1 (DEFAULT_TABLE table=TestTable [id=99aba3e700ee49ac92d0877718efff97]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:05:55.723716 2798 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fea86fd2877b486aa03754ce163122a1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:05:55.743073 3019 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Bootstrap starting.
I20250623 14:05:55.751210 3020 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Bootstrap starting.
I20250623 14:05:55.753437 3019 tablet_bootstrap.cc:654] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Neither blocks nor log segments found. Creating new log.
I20250623 14:05:55.756335 3019 log.cc:826] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Log is configured to *not* fsync() on all Append() calls
I20250623 14:05:55.756628 3021 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Bootstrap starting.
I20250623 14:05:55.760185 3020 tablet_bootstrap.cc:654] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Neither blocks nor log segments found. Creating new log.
I20250623 14:05:55.762753 3020 log.cc:826] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Log is configured to *not* fsync() on all Append() calls
I20250623 14:05:55.766539 3021 tablet_bootstrap.cc:654] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Neither blocks nor log segments found. Creating new log.
I20250623 14:05:55.769181 3021 log.cc:826] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Log is configured to *not* fsync() on all Append() calls
I20250623 14:05:55.769605 3020 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: No bootstrap required, opened a new log
I20250623 14:05:55.769814 3019 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: No bootstrap required, opened a new log
I20250623 14:05:55.770145 3020 ts_tablet_manager.cc:1397] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Time spent bootstrapping tablet: real 0.020s user 0.007s sys 0.010s
I20250623 14:05:55.770319 3019 ts_tablet_manager.cc:1397] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Time spent bootstrapping tablet: real 0.028s user 0.017s sys 0.004s
I20250623 14:05:55.780731 3021 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: No bootstrap required, opened a new log
I20250623 14:05:55.781316 3021 ts_tablet_manager.cc:1397] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Time spent bootstrapping tablet: real 0.025s user 0.012s sys 0.008s
I20250623 14:05:55.789227 3020 raft_consensus.cc:357] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.790211 3020 raft_consensus.cc:383] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:05:55.790515 3020 raft_consensus.cc:738] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c33c0cbf8df04575b3325f409bde473b, State: Initialized, Role: FOLLOWER
I20250623 14:05:55.791405 3020 consensus_queue.cc:260] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.795639 3020 ts_tablet_manager.cc:1428] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Time spent starting tablet: real 0.025s user 0.019s sys 0.005s
I20250623 14:05:55.798434 3019 raft_consensus.cc:357] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.799351 3019 raft_consensus.cc:383] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:05:55.799652 3019 raft_consensus.cc:738] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 648df76281f84c7dac14b03abf0373f7, State: Initialized, Role: FOLLOWER
I20250623 14:05:55.800530 3019 consensus_queue.cc:260] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.803709 3000 heartbeater.cc:499] Master 127.2.78.62:45007 was elected leader, sending a full tablet report...
I20250623 14:05:55.805186 3019 ts_tablet_manager.cc:1428] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Time spent starting tablet: real 0.035s user 0.033s sys 0.000s
I20250623 14:05:55.809219 3021 raft_consensus.cc:357] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.810204 3021 raft_consensus.cc:383] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:05:55.810511 3021 raft_consensus.cc:738] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 988ce200035b4f73845840c70c47d7ad, State: Initialized, Role: FOLLOWER
I20250623 14:05:55.811340 3021 consensus_queue.cc:260] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.815302 3021 ts_tablet_manager.cc:1428] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Time spent starting tablet: real 0.034s user 0.027s sys 0.009s
W20250623 14:05:55.926189 2731 tablet.cc:2378] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:05:55.960263 3025 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:05:55.960723 3025 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.963325 3025 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867), 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
W20250623 14:05:55.972785 2865 tablet.cc:2378] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:05:55.974505 2954 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "c33c0cbf8df04575b3325f409bde473b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "648df76281f84c7dac14b03abf0373f7" is_pre_election: true
I20250623 14:05:55.975070 2818 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "c33c0cbf8df04575b3325f409bde473b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "988ce200035b4f73845840c70c47d7ad" is_pre_election: true
I20250623 14:05:55.975394 2954 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c33c0cbf8df04575b3325f409bde473b in term 0.
I20250623 14:05:55.975869 2818 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c33c0cbf8df04575b3325f409bde473b in term 0.
I20250623 14:05:55.976680 2617 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, c33c0cbf8df04575b3325f409bde473b; no voters:
I20250623 14:05:55.977479 3025 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:05:55.977838 3025 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:05:55.978091 3025 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:05:55.982275 3025 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:55.983651 3025 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [CANDIDATE]: Term 1 election: Requested vote from peers 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867), 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
I20250623 14:05:55.984524 2954 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "c33c0cbf8df04575b3325f409bde473b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "648df76281f84c7dac14b03abf0373f7"
I20250623 14:05:55.984580 2818 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "c33c0cbf8df04575b3325f409bde473b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "988ce200035b4f73845840c70c47d7ad"
I20250623 14:05:55.984931 2954 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:05:55.984975 2818 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:05:55.989253 2954 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c33c0cbf8df04575b3325f409bde473b in term 1.
I20250623 14:05:55.989275 2818 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c33c0cbf8df04575b3325f409bde473b in term 1.
I20250623 14:05:55.990149 2617 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, c33c0cbf8df04575b3325f409bde473b; no voters:
I20250623 14:05:55.990764 3025 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:05:55.992211 3025 raft_consensus.cc:695] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [term 1 LEADER]: Becoming Leader. State: Replica: c33c0cbf8df04575b3325f409bde473b, State: Running, Role: LEADER
I20250623 14:05:55.992929 3025 consensus_queue.cc:237] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:56.004305 2406 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b reported cstate change: term changed from 0 to 1, leader changed from <none> to c33c0cbf8df04575b3325f409bde473b (127.2.78.2). New cstate: current_term: 1 leader_uuid: "c33c0cbf8df04575b3325f409bde473b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } health_report { overall_health: UNKNOWN } } }
W20250623 14:05:56.042424 3001 tablet.cc:2378] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:05:56.092511 2360 external_mini_cluster.cc:934] 4 TS(s) registered with all masters
I20250623 14:05:56.096230 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver c33c0cbf8df04575b3325f409bde473b to finish bootstrapping
I20250623 14:05:56.111888 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 988ce200035b4f73845840c70c47d7ad to finish bootstrapping
I20250623 14:05:56.123986 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 648df76281f84c7dac14b03abf0373f7 to finish bootstrapping
I20250623 14:05:56.135987 2360 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250623 14:05:56.139415 2360 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "fea86fd2877b486aa03754ce163122a1"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "648df76281f84c7dac14b03abf0373f7"
rpc_addresses {
host: "127.2.78.4"
port: 32867
}
}
ts_infos {
permanent_uuid: "c33c0cbf8df04575b3325f409bde473b"
rpc_addresses {
host: "127.2.78.2"
port: 38737
}
}
ts_infos {
permanent_uuid: "988ce200035b4f73845840c70c47d7ad"
rpc_addresses {
host: "127.2.78.3"
port: 41139
}
}
I20250623 14:05:56.427807 3025 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [LEADER]: Connected to new peer: Peer: permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:05:56.444298 3030 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P c33c0cbf8df04575b3325f409bde473b [LEADER]: Connected to new peer: Peer: permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250623 14:05:56.452486 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 2600
W20250623 14:05:56.479496 2753 connection.cc:537] server connection from 127.2.78.2:58929 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250623 14:05:56.482461 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 2374
I20250623 14:05:56.513720 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:45007
--webserver_interface=127.2.78.62
--webserver_port=37843
--builtin_ntp_servers=127.2.78.20:34013
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:45007 with env {}
W20250623 14:05:56.867249 3042 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:56.867880 3042 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:05:56.868358 3042 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:56.900094 3042 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:05:56.900471 3042 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:05:56.900750 3042 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:05:56.900998 3042 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:05:56.936620 3042 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34013
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:45007
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:45007
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=37843
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:05:56.938045 3042 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:05:56.939831 3042 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:05:56.955808 3049 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:05:57.201177 2596 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:45007 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:45007: connect: Connection refused (error 111)
W20250623 14:05:57.454933 3000 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:45007 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:45007: connect: Connection refused (error 111)
W20250623 14:05:57.471556 2864 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:45007 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:45007: connect: Connection refused (error 111)
I20250623 14:05:57.940775 3057 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader c33c0cbf8df04575b3325f409bde473b)
I20250623 14:05:57.941192 3057 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:57.943389 3057 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737), 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
W20250623 14:05:57.946408 2889 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111)
I20250623 14:05:57.958531 3061 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 1 FOLLOWER]: Starting pre-election (detected failure of leader c33c0cbf8df04575b3325f409bde473b)
I20250623 14:05:57.959220 3061 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
W20250623 14:05:57.960646 2889 leader_election.cc:336] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737): Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111)
I20250623 14:05:57.962991 3061 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867), c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737)
I20250623 14:05:57.970765 2818 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "648df76281f84c7dac14b03abf0373f7" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "988ce200035b4f73845840c70c47d7ad" is_pre_election: true
I20250623 14:05:57.971428 2818 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 648df76281f84c7dac14b03abf0373f7 in term 1.
W20250623 14:05:57.972415 2754 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111)
I20250623 14:05:57.972885 2888 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, 988ce200035b4f73845840c70c47d7ad; no voters: c33c0cbf8df04575b3325f409bde473b
I20250623 14:05:57.973793 3057 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250623 14:05:57.974116 3057 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Starting leader election (detected failure of leader c33c0cbf8df04575b3325f409bde473b)
I20250623 14:05:57.974447 3057 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:05:57.981452 3057 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:05:57.983371 3057 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 election: Requested vote from peers c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737), 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
I20250623 14:05:57.985648 2818 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "648df76281f84c7dac14b03abf0373f7" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "988ce200035b4f73845840c70c47d7ad"
I20250623 14:05:57.986238 2818 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 1 FOLLOWER]: Advancing to term 2
W20250623 14:05:57.987785 2754 leader_election.cc:336] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737): Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111)
I20250623 14:05:57.988742 2954 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "988ce200035b4f73845840c70c47d7ad" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "648df76281f84c7dac14b03abf0373f7" is_pre_election: true
W20250623 14:05:57.988543 2889 leader_election.cc:336] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737): Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111)
I20250623 14:05:57.989552 2954 raft_consensus.cc:2391] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 988ce200035b4f73845840c70c47d7ad in current term 2: Already voted for candidate 648df76281f84c7dac14b03abf0373f7 in this term.
I20250623 14:05:57.990904 2751 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 988ce200035b4f73845840c70c47d7ad; no voters: 648df76281f84c7dac14b03abf0373f7, c33c0cbf8df04575b3325f409bde473b
I20250623 14:05:57.991986 2818 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 648df76281f84c7dac14b03abf0373f7 in term 2.
I20250623 14:05:57.992331 3061 raft_consensus.cc:2747] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250623 14:05:57.993253 2888 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, 988ce200035b4f73845840c70c47d7ad; no voters: c33c0cbf8df04575b3325f409bde473b
I20250623 14:05:57.993840 3057 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:05:57.995388 3057 raft_consensus.cc:695] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 LEADER]: Becoming Leader. State: Replica: 648df76281f84c7dac14b03abf0373f7, State: Running, Role: LEADER
I20250623 14:05:57.996227 3057 consensus_queue.cc:237] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
W20250623 14:05:56.955873 3051 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:56.957444 3042 server_base.cc:1048] running on GCE node
W20250623 14:05:56.955996 3048 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:05:58.172340 3042 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:05:58.175030 3042 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:05:58.176405 3042 hybrid_clock.cc:648] HybridClock initialized: now 1750687558176367 us; error 50 us; skew 500 ppm
I20250623 14:05:58.177239 3042 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:05:58.183724 3042 webserver.cc:469] Webserver started at http://127.2.78.62:37843/ using document root <none> and password file <none>
I20250623 14:05:58.184731 3042 fs_manager.cc:362] Metadata directory not provided
I20250623 14:05:58.184942 3042 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:05:58.193212 3042 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.000s
I20250623 14:05:58.197867 3071 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:05:58.198997 3042 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250623 14:05:58.199366 3042 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "824bbcb72f3a4646978d08b8ffb24673"
format_stamp: "Formatted at 2025-06-23 14:05:47 on dist-test-slave-stbh"
I20250623 14:05:58.201508 3042 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:05:58.258952 3042 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:05:58.260457 3042 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:05:58.260896 3042 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:05:58.331745 3042 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:45007
I20250623 14:05:58.331830 3124 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:45007 every 8 connection(s)
I20250623 14:05:58.334553 3042 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:05:58.338143 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3042
I20250623 14:05:58.338698 2360 kudu-admin-test.cc:735] Forcing unsafe config change on tserver 988ce200035b4f73845840c70c47d7ad
I20250623 14:05:58.345516 3125 sys_catalog.cc:263] Verifying existing consensus state
I20250623 14:05:58.350517 3125 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Bootstrap starting.
I20250623 14:05:58.389006 3125 log.cc:826] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Log is configured to *not* fsync() on all Append() calls
W20250623 14:05:58.407053 2889 consensus_peers.cc:487] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 -> Peer c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737): Couldn't send request to peer c33c0cbf8df04575b3325f409bde473b. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250623 14:05:58.413354 3125 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:05:58.414232 3125 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Bootstrap complete.
I20250623 14:05:58.435936 2818 raft_consensus.cc:1273] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 2 FOLLOWER]: Refusing update from remote peer 648df76281f84c7dac14b03abf0373f7: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250623 14:05:58.437422 3057 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [LEADER]: Connected to new peer: Peer: permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250623 14:05:58.435647 3125 raft_consensus.cc:357] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:58.438428 3125 raft_consensus.cc:738] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 824bbcb72f3a4646978d08b8ffb24673, State: Initialized, Role: FOLLOWER
I20250623 14:05:58.439229 3125 consensus_queue.cc:260] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:58.439815 3125 raft_consensus.cc:397] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:05:58.440150 3125 raft_consensus.cc:491] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:05:58.440579 3125 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:05:58.452950 3125 raft_consensus.cc:513] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:58.454208 3125 leader_election.cc:304] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 824bbcb72f3a4646978d08b8ffb24673; no voters:
I20250623 14:05:58.463742 3125 leader_election.cc:290] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250623 14:05:58.464433 3131 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:05:58.476001 3125 sys_catalog.cc:564] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:05:58.483315 3131 raft_consensus.cc:695] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [term 2 LEADER]: Becoming Leader. State: Replica: 824bbcb72f3a4646978d08b8ffb24673, State: Running, Role: LEADER
I20250623 14:05:58.484504 3131 consensus_queue.cc:237] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } }
I20250623 14:05:58.493368 2864 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:58.495326 3000 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:58.516032 3134 sys_catalog.cc:455] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "824bbcb72f3a4646978d08b8ffb24673" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } } }
I20250623 14:05:58.516814 3134 sys_catalog.cc:458] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: This master's current role is: LEADER
I20250623 14:05:58.518944 3136 sys_catalog.cc:455] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 824bbcb72f3a4646978d08b8ffb24673. Latest consensus state: current_term: 2 leader_uuid: "824bbcb72f3a4646978d08b8ffb24673" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "824bbcb72f3a4646978d08b8ffb24673" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 45007 } } }
I20250623 14:05:58.525846 3136 sys_catalog.cc:458] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673 [sys.catalog]: This master's current role is: LEADER
I20250623 14:05:58.543357 3145 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:05:58.570144 3145 catalog_manager.cc:671] Loaded metadata for table TestTable [id=99aba3e700ee49ac92d0877718efff97]
I20250623 14:05:58.586931 3145 tablet_loader.cc:96] loaded metadata for tablet fea86fd2877b486aa03754ce163122a1 (table TestTable [id=99aba3e700ee49ac92d0877718efff97])
I20250623 14:05:58.588364 3145 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:05:58.593071 3145 catalog_manager.cc:1261] Loaded cluster ID: 5b17e5f5716e4dfdb25d0c63c754d132
I20250623 14:05:58.593467 3145 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:05:58.602007 3145 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:05:58.607318 3145 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 824bbcb72f3a4646978d08b8ffb24673: Loaded TSK: 0
I20250623 14:05:58.608985 3145 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250623 14:05:58.727808 3127 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:05:58.728600 3127 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:05:58.772619 3127 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250623 14:05:59.257777 2596 heartbeater.cc:344] Connected to a master server at 127.2.78.62:45007
I20250623 14:05:59.265923 3090 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "9263ef5a135540069e51faf8549ab82c" instance_seqno: 1750687550108135) as {username='slave'} at 127.2.78.1:40587; Asking this server to re-register.
I20250623 14:05:59.267975 2596 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:59.268954 2596 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:59.273874 3089 ts_manager.cc:194] Registered new tserver with Master: 9263ef5a135540069e51faf8549ab82c (127.2.78.1:37065)
I20250623 14:05:59.512616 3090 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "988ce200035b4f73845840c70c47d7ad" instance_seqno: 1750687553679045) as {username='slave'} at 127.2.78.3:56641; Asking this server to re-register.
I20250623 14:05:59.516407 2864 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:59.517271 2864 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:59.519098 3090 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "648df76281f84c7dac14b03abf0373f7" instance_seqno: 1750687555492947) as {username='slave'} at 127.2.78.4:49885; Asking this server to re-register.
I20250623 14:05:59.528317 3000 heartbeater.cc:461] Registering TS with master...
I20250623 14:05:59.529217 3000 heartbeater.cc:507] Master 127.2.78.62:45007 requested a full tablet report, sending...
I20250623 14:05:59.533802 3089 ts_manager.cc:194] Registered new tserver with Master: 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867)
I20250623 14:05:59.541312 3088 ts_manager.cc:194] Registered new tserver with Master: 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
I20250623 14:05:59.546382 3089 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 reported cstate change: term changed from 1 to 2, leader changed from c33c0cbf8df04575b3325f409bde473b (127.2.78.2) to 648df76281f84c7dac14b03abf0373f7 (127.2.78.4). New cstate: current_term: 2 leader_uuid: "648df76281f84c7dac14b03abf0373f7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } health_report { overall_health: HEALTHY } } }
W20250623 14:06:00.221197 3127 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.404s user 0.542s sys 0.854s
W20250623 14:06:00.221628 3127 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.404s user 0.542s sys 0.854s
I20250623 14:06:00.285581 2817 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "988ce200035b4f73845840c70c47d7ad"
tablet_id: "fea86fd2877b486aa03754ce163122a1"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "648df76281f84c7dac14b03abf0373f7"
}
peers {
permanent_uuid: "988ce200035b4f73845840c70c47d7ad"
}
}
from {username='slave'} at 127.0.0.1:36122
W20250623 14:06:00.287025 2817 raft_consensus.cc:2216] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 2 FOLLOWER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true
I20250623 14:06:00.288163 2817 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 2 FOLLOWER]: Advancing to term 3
W20250623 14:06:00.553614 2889 consensus_peers.cc:487] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 -> Peer c33c0cbf8df04575b3325f409bde473b (127.2.78.2:38737): Couldn't send request to peer c33c0cbf8df04575b3325f409bde473b. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.2:38737: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250623 14:06:00.581728 2817 raft_consensus.cc:1238] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Rejecting Update request from peer 648df76281f84c7dac14b03abf0373f7 for earlier term 2. Current term is 3. Ops: []
I20250623 14:06:00.583091 3057 consensus_queue.cc:1046] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [LEADER]: Peer responded invalid term: Peer: permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 }, Status: INVALID_TERM, Last received: 2.2, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250623 14:06:00.584569 3178 raft_consensus.cc:3053] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 LEADER]: Stepping down as leader of term 2
I20250623 14:06:00.584851 3178 raft_consensus.cc:738] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 LEADER]: Becoming Follower/Learner. State: Replica: 648df76281f84c7dac14b03abf0373f7, State: Running, Role: LEADER
I20250623 14:06:00.585621 3178 consensus_queue.cc:260] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } }
I20250623 14:06:00.586584 3178 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 2 FOLLOWER]: Advancing to term 3
I20250623 14:06:01.797153 3181 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250623 14:06:01.797547 3181 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true
I20250623 14:06:01.798636 3181 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867)
I20250623 14:06:01.799603 2954 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "988ce200035b4f73845840c70c47d7ad" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "648df76281f84c7dac14b03abf0373f7" is_pre_election: true
I20250623 14:06:01.800084 2954 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 988ce200035b4f73845840c70c47d7ad in term 3.
I20250623 14:06:01.801005 2751 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, 988ce200035b4f73845840c70c47d7ad; no voters:
I20250623 14:06:01.801565 3181 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250623 14:06:01.801856 3181 raft_consensus.cc:491] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250623 14:06:01.802107 3181 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 3 FOLLOWER]: Advancing to term 4
I20250623 14:06:01.806277 3181 raft_consensus.cc:513] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true
I20250623 14:06:01.807232 3181 leader_election.cc:290] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 4 election: Requested vote from peers 648df76281f84c7dac14b03abf0373f7 (127.2.78.4:32867)
I20250623 14:06:01.808140 2954 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "fea86fd2877b486aa03754ce163122a1" candidate_uuid: "988ce200035b4f73845840c70c47d7ad" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "648df76281f84c7dac14b03abf0373f7"
I20250623 14:06:01.808554 2954 raft_consensus.cc:3058] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 3 FOLLOWER]: Advancing to term 4
I20250623 14:06:01.812772 2954 raft_consensus.cc:2466] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 988ce200035b4f73845840c70c47d7ad in term 4.
I20250623 14:06:01.813516 2751 leader_election.cc:304] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 648df76281f84c7dac14b03abf0373f7, 988ce200035b4f73845840c70c47d7ad; no voters:
I20250623 14:06:01.814111 3181 raft_consensus.cc:2802] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 FOLLOWER]: Leader election won for term 4
I20250623 14:06:01.814908 3181 raft_consensus.cc:695] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 LEADER]: Becoming Leader. State: Replica: 988ce200035b4f73845840c70c47d7ad, State: Running, Role: LEADER
I20250623 14:06:01.815558 3181 consensus_queue.cc:237] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true
I20250623 14:06:01.821048 3089 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad reported cstate change: term changed from 2 to 4, leader changed from 648df76281f84c7dac14b03abf0373f7 (127.2.78.4) to 988ce200035b4f73845840c70c47d7ad (127.2.78.3), now has a pending config: VOTER 648df76281f84c7dac14b03abf0373f7 (127.2.78.4), VOTER 988ce200035b4f73845840c70c47d7ad (127.2.78.3). New cstate: current_term: 4 leader_uuid: "988ce200035b4f73845840c70c47d7ad" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c33c0cbf8df04575b3325f409bde473b" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 38737 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } health_report { overall_health: HEALTHY } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true }
I20250623 14:06:02.268496 2954 raft_consensus.cc:1273] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Refusing update from remote peer 988ce200035b4f73845840c70c47d7ad: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250623 14:06:02.269887 3181 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Connected to new peer: Peer: permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.001s
I20250623 14:06:02.277776 3182 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER c33c0cbf8df04575b3325f409bde473b (127.2.78.2) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true }
I20250623 14:06:02.278708 2954 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER c33c0cbf8df04575b3325f409bde473b (127.2.78.2) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true }
I20250623 14:06:02.290753 3089 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 reported cstate change: config changed from index -1 to 3, VOTER c33c0cbf8df04575b3325f409bde473b (127.2.78.2) evicted, no longer has a pending config: VOTER 648df76281f84c7dac14b03abf0373f7 (127.2.78.4), VOTER 988ce200035b4f73845840c70c47d7ad (127.2.78.3). New cstate: current_term: 4 leader_uuid: "988ce200035b4f73845840c70c47d7ad" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } unsafe_config_change: true }
W20250623 14:06:02.297137 3089 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet fea86fd2877b486aa03754ce163122a1 on TS c33c0cbf8df04575b3325f409bde473b: Not found: failed to reset TS proxy: Could not find TS for UUID c33c0cbf8df04575b3325f409bde473b
I20250623 14:06:02.313088 2817 consensus_queue.cc:237] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } } unsafe_config_change: true
I20250623 14:06:02.317601 2954 raft_consensus.cc:1273] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Refusing update from remote peer 988ce200035b4f73845840c70c47d7ad: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250623 14:06:02.318965 3181 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Connected to new peer: Peer: permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250623 14:06:02.325129 3182 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 9263ef5a135540069e51faf8549ab82c (127.2.78.1) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } } unsafe_config_change: true }
I20250623 14:06:02.326749 2954 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 9263ef5a135540069e51faf8549ab82c (127.2.78.1) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } } unsafe_config_change: true }
W20250623 14:06:02.329548 2752 consensus_peers.cc:487] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad -> Peer 9263ef5a135540069e51faf8549ab82c (127.2.78.1:37065): Couldn't send request to peer 9263ef5a135540069e51faf8549ab82c. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: fea86fd2877b486aa03754ce163122a1. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:06:02.333426 3075 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet fea86fd2877b486aa03754ce163122a1 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250623 14:06:02.336489 3089 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad reported cstate change: config changed from index 3 to 5, NON_VOTER 9263ef5a135540069e51faf8549ab82c (127.2.78.1) added. New cstate: current_term: 4 leader_uuid: "988ce200035b4f73845840c70c47d7ad" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250623 14:06:02.350358 3074 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet fea86fd2877b486aa03754ce163122a1 on TS c33c0cbf8df04575b3325f409bde473b failed: Not found: failed to reset TS proxy: Could not find TS for UUID c33c0cbf8df04575b3325f409bde473b
I20250623 14:06:02.819065 3197 ts_tablet_manager.cc:927] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Initiating tablet copy from peer 988ce200035b4f73845840c70c47d7ad (127.2.78.3:41139)
I20250623 14:06:02.821542 3197 tablet_copy_client.cc:323] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: tablet copy: Beginning tablet copy session from remote peer at address 127.2.78.3:41139
I20250623 14:06:02.832604 2838 tablet_copy_service.cc:140] P 988ce200035b4f73845840c70c47d7ad: Received BeginTabletCopySession request for tablet fea86fd2877b486aa03754ce163122a1 from peer 9263ef5a135540069e51faf8549ab82c ({username='slave'} at 127.2.78.1:33565)
I20250623 14:06:02.833060 2838 tablet_copy_service.cc:161] P 988ce200035b4f73845840c70c47d7ad: Beginning new tablet copy session on tablet fea86fd2877b486aa03754ce163122a1 from peer 9263ef5a135540069e51faf8549ab82c at {username='slave'} at 127.2.78.1:33565: session id = 9263ef5a135540069e51faf8549ab82c-fea86fd2877b486aa03754ce163122a1
I20250623 14:06:02.838321 2838 tablet_copy_source_session.cc:215] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: Tablet Copy: opened 0 blocks and 1 log segments
I20250623 14:06:02.843150 3197 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fea86fd2877b486aa03754ce163122a1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:02.861721 3197 tablet_copy_client.cc:806] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: tablet copy: Starting download of 0 data blocks...
I20250623 14:06:02.862329 3197 tablet_copy_client.cc:670] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: tablet copy: Starting download of 1 WAL segments...
I20250623 14:06:02.865618 3197 tablet_copy_client.cc:538] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250623 14:06:02.871619 3197 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Bootstrap starting.
I20250623 14:06:02.883147 3197 log.cc:826] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:02.893425 3197 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:06:02.894170 3197 tablet_bootstrap.cc:492] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Bootstrap complete.
I20250623 14:06:02.894726 3197 ts_tablet_manager.cc:1397] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Time spent bootstrapping tablet: real 0.023s user 0.021s sys 0.002s
I20250623 14:06:02.911613 3197 raft_consensus.cc:357] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } } unsafe_config_change: true
I20250623 14:06:02.912498 3197 raft_consensus.cc:738] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 9263ef5a135540069e51faf8549ab82c, State: Initialized, Role: LEARNER
I20250623 14:06:02.913174 3197 consensus_queue.cc:260] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: true } } unsafe_config_change: true
I20250623 14:06:02.916671 3197 ts_tablet_manager.cc:1428] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c: Time spent starting tablet: real 0.022s user 0.021s sys 0.000s
I20250623 14:06:02.918680 2838 tablet_copy_service.cc:342] P 988ce200035b4f73845840c70c47d7ad: Request end of tablet copy session 9263ef5a135540069e51faf8549ab82c-fea86fd2877b486aa03754ce163122a1 received from {username='slave'} at 127.2.78.1:33565
I20250623 14:06:02.919070 2838 tablet_copy_service.cc:434] P 988ce200035b4f73845840c70c47d7ad: ending tablet copy session 9263ef5a135540069e51faf8549ab82c-fea86fd2877b486aa03754ce163122a1 on tablet fea86fd2877b486aa03754ce163122a1 with peer 9263ef5a135540069e51faf8549ab82c
I20250623 14:06:03.405997 2551 raft_consensus.cc:1215] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250623 14:06:03.517221 3074 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet fea86fd2877b486aa03754ce163122a1 on TS c33c0cbf8df04575b3325f409bde473b failed: Not found: failed to reset TS proxy: Could not find TS for UUID c33c0cbf8df04575b3325f409bde473b
I20250623 14:06:04.005457 3204 raft_consensus.cc:1062] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad: attempting to promote NON_VOTER 9263ef5a135540069e51faf8549ab82c to VOTER
I20250623 14:06:04.007454 3204 consensus_queue.cc:237] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false } } unsafe_config_change: true
I20250623 14:06:04.013851 2551 raft_consensus.cc:1273] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [term 4 LEARNER]: Refusing update from remote peer 988ce200035b4f73845840c70c47d7ad: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250623 14:06:04.015789 3205 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Connected to new peer: Peer: permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.001s
I20250623 14:06:04.021993 2954 raft_consensus.cc:1273] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Refusing update from remote peer 988ce200035b4f73845840c70c47d7ad: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250623 14:06:04.023597 3205 consensus_queue.cc:1035] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [LEADER]: Connected to new peer: Peer: permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250623 14:06:04.024813 3208 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 9263ef5a135540069e51faf8549ab82c (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false } } unsafe_config_change: true }
I20250623 14:06:04.026240 2551 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 9263ef5a135540069e51faf8549ab82c [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 9263ef5a135540069e51faf8549ab82c (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false } } unsafe_config_change: true }
I20250623 14:06:04.029208 2954 raft_consensus.cc:2953] T fea86fd2877b486aa03754ce163122a1 P 648df76281f84c7dac14b03abf0373f7 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 9263ef5a135540069e51faf8549ab82c (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false } } unsafe_config_change: true }
I20250623 14:06:04.036212 3088 catalog_manager.cc:5582] T fea86fd2877b486aa03754ce163122a1 P 988ce200035b4f73845840c70c47d7ad reported cstate change: config changed from index 5 to 6, 9263ef5a135540069e51faf8549ab82c (127.2.78.1) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "988ce200035b4f73845840c70c47d7ad" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "648df76281f84c7dac14b03abf0373f7" member_type: VOTER last_known_addr { host: "127.2.78.4" port: 32867 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "988ce200035b4f73845840c70c47d7ad" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 41139 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9263ef5a135540069e51faf8549ab82c" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 37065 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250623 14:06:04.094309 2360 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250623 14:06:04.110715 2360 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "fea86fd2877b486aa03754ce163122a1"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "648df76281f84c7dac14b03abf0373f7"
rpc_addresses {
host: "127.2.78.4"
port: 32867
}
}
ts_infos {
permanent_uuid: "988ce200035b4f73845840c70c47d7ad"
rpc_addresses {
host: "127.2.78.3"
port: 41139
}
}
ts_infos {
permanent_uuid: "9263ef5a135540069e51faf8549ab82c"
rpc_addresses {
host: "127.2.78.1"
port: 37065
}
}
I20250623 14:06:04.112941 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 2466
I20250623 14:06:04.138350 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 2734
I20250623 14:06:04.178670 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 2868
I20250623 14:06:04.209529 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3042
2025-06-23T14:06:04Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (19145 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250623 14:06:04.268828 2360 test_util.cc:276] Using random seed: -1235122946
I20250623 14:06:04.274853 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:06:04.275084 2360 ts_itest-base.cc:116] --------------
I20250623 14:06:04.275251 2360 ts_itest-base.cc:117] 3 tablet servers
I20250623 14:06:04.275395 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:06:04.275547 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:06:04Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:06:04Z Disabled control of system clock
I20250623 14:06:04.313398 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40553
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:36793
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:40553
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250623 14:06:04.609226 3227 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:04.609843 3227 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:04.610262 3227 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:04.641217 3227 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:06:04.641528 3227 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:04.641729 3227 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:06:04.641937 3227 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:06:04.677574 3227 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36793
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:40553
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40553
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:04.678885 3227 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:04.680511 3227 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:04.696952 3233 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:04.697319 3236 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:04.698540 3234 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:04.698879 3227 server_base.cc:1048] running on GCE node
I20250623 14:06:05.877372 3227 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:05.880089 3227 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:05.881443 3227 hybrid_clock.cc:648] HybridClock initialized: now 1750687565881397 us; error 53 us; skew 500 ppm
I20250623 14:06:05.882314 3227 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:05.888664 3227 webserver.cc:469] Webserver started at http://127.2.78.62:37095/ using document root <none> and password file <none>
I20250623 14:06:05.889639 3227 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:05.889902 3227 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:05.890363 3227 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:05.894969 3227 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "ca19472fe3d04663959e5e341472daee"
format_stamp: "Formatted at 2025-06-23 14:06:05 on dist-test-slave-stbh"
I20250623 14:06:05.896136 3227 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "ca19472fe3d04663959e5e341472daee"
format_stamp: "Formatted at 2025-06-23 14:06:05 on dist-test-slave-stbh"
I20250623 14:06:05.903777 3227 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.003s
I20250623 14:06:05.910172 3244 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:05.911314 3227 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250623 14:06:05.911689 3227 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "ca19472fe3d04663959e5e341472daee"
format_stamp: "Formatted at 2025-06-23 14:06:05 on dist-test-slave-stbh"
I20250623 14:06:05.912024 3227 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:05.963827 3227 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:05.965356 3227 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:05.965847 3227 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:06.036132 3227 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:40553
I20250623 14:06:06.036209 3295 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:40553 every 8 connection(s)
I20250623 14:06:06.038910 3227 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:06:06.044204 3296 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:06.045820 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3227
I20250623 14:06:06.046567 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:06:06.067056 3296 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee: Bootstrap starting.
I20250623 14:06:06.073513 3296 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:06.075362 3296 log.cc:826] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:06.080211 3296 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee: No bootstrap required, opened a new log
I20250623 14:06:06.098225 3296 raft_consensus.cc:357] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } }
I20250623 14:06:06.098888 3296 raft_consensus.cc:383] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:06.099090 3296 raft_consensus.cc:738] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ca19472fe3d04663959e5e341472daee, State: Initialized, Role: FOLLOWER
I20250623 14:06:06.099704 3296 consensus_queue.cc:260] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } }
I20250623 14:06:06.100167 3296 raft_consensus.cc:397] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:06.100390 3296 raft_consensus.cc:491] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:06.100654 3296 raft_consensus.cc:3058] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:06.104960 3296 raft_consensus.cc:513] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } }
I20250623 14:06:06.105648 3296 leader_election.cc:304] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ca19472fe3d04663959e5e341472daee; no voters:
I20250623 14:06:06.107316 3296 leader_election.cc:290] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:06.108074 3301 raft_consensus.cc:2802] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:06.110272 3301 raft_consensus.cc:695] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [term 1 LEADER]: Becoming Leader. State: Replica: ca19472fe3d04663959e5e341472daee, State: Running, Role: LEADER
I20250623 14:06:06.111052 3301 consensus_queue.cc:237] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } }
I20250623 14:06:06.112068 3296 sys_catalog.cc:564] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:06:06.121666 3303 sys_catalog.cc:455] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [sys.catalog]: SysCatalogTable state changed. Reason: New leader ca19472fe3d04663959e5e341472daee. Latest consensus state: current_term: 1 leader_uuid: "ca19472fe3d04663959e5e341472daee" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } } }
I20250623 14:06:06.121699 3302 sys_catalog.cc:455] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "ca19472fe3d04663959e5e341472daee" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ca19472fe3d04663959e5e341472daee" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40553 } } }
I20250623 14:06:06.122488 3303 sys_catalog.cc:458] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:06.122679 3302 sys_catalog.cc:458] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:06.125634 3309 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:06:06.138327 3309 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:06:06.155575 3309 catalog_manager.cc:1349] Generated new cluster ID: 114614fdd5284ea9adda9167bc1b2c3c
I20250623 14:06:06.155856 3309 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:06:06.178103 3309 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:06:06.179679 3309 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:06:06.198858 3309 catalog_manager.cc:5955] T 00000000000000000000000000000000 P ca19472fe3d04663959e5e341472daee: Generated new TSK 0
I20250623 14:06:06.199805 3309 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:06:06.220567 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--builtin_ntp_servers=127.2.78.20:36793
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250623 14:06:06.522475 3320 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250623 14:06:06.523128 3320 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:06.523402 3320 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:06.523895 3320 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:06.555089 3320 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:06.555953 3320 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:06:06.591445 3320 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36793
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:06.592783 3320 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:06.594478 3320 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:06.611646 3326 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:06.612160 3329 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:06.613940 3320 server_base.cc:1048] running on GCE node
W20250623 14:06:06.613023 3327 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:07.789506 3320 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:07.792352 3320 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:07.793839 3320 hybrid_clock.cc:648] HybridClock initialized: now 1750687567793776 us; error 80 us; skew 500 ppm
I20250623 14:06:07.794689 3320 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:07.806617 3320 webserver.cc:469] Webserver started at http://127.2.78.1:35099/ using document root <none> and password file <none>
I20250623 14:06:07.807567 3320 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:07.807778 3320 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:07.808244 3320 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:07.812755 3320 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "560f4f603868445d812a943bff92feec"
format_stamp: "Formatted at 2025-06-23 14:06:07 on dist-test-slave-stbh"
I20250623 14:06:07.814040 3320 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "560f4f603868445d812a943bff92feec"
format_stamp: "Formatted at 2025-06-23 14:06:07 on dist-test-slave-stbh"
I20250623 14:06:07.821491 3320 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250623 14:06:07.827538 3336 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:07.828709 3320 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250623 14:06:07.829018 3320 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "560f4f603868445d812a943bff92feec"
format_stamp: "Formatted at 2025-06-23 14:06:07 on dist-test-slave-stbh"
I20250623 14:06:07.829339 3320 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:07.882068 3320 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:07.883620 3320 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:07.884078 3320 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:07.886700 3320 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:07.890878 3320 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:07.891134 3320 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:07.891357 3320 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:07.891506 3320 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:08.032737 3320 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:40231
I20250623 14:06:08.032859 3449 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:40231 every 8 connection(s)
I20250623 14:06:08.036298 3320 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:06:08.044420 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3320
I20250623 14:06:08.044857 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:06:08.050611 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--builtin_ntp_servers=127.2.78.20:36793
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250623 14:06:08.061198 3450 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40553
I20250623 14:06:08.061648 3450 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:08.062717 3450 heartbeater.cc:507] Master 127.2.78.62:40553 requested a full tablet report, sending...
I20250623 14:06:08.065312 3261 ts_manager.cc:194] Registered new tserver with Master: 560f4f603868445d812a943bff92feec (127.2.78.1:40231)
I20250623 14:06:08.067337 3261 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:34027
W20250623 14:06:08.355171 3454 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250623 14:06:08.355813 3454 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:08.356078 3454 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:08.356557 3454 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:08.388562 3454 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:08.389389 3454 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:06:08.424404 3454 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36793
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:08.425781 3454 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:08.427486 3454 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:08.443759 3461 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:09.070856 3450 heartbeater.cc:499] Master 127.2.78.62:40553 was elected leader, sending a full tablet report...
W20250623 14:06:08.443751 3463 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:08.445230 3454 server_base.cc:1048] running on GCE node
W20250623 14:06:08.445269 3460 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:09.601843 3454 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:09.604555 3454 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:09.605981 3454 hybrid_clock.cc:648] HybridClock initialized: now 1750687569605928 us; error 73 us; skew 500 ppm
I20250623 14:06:09.606746 3454 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:09.613363 3454 webserver.cc:469] Webserver started at http://127.2.78.2:44043/ using document root <none> and password file <none>
I20250623 14:06:09.614285 3454 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:09.614468 3454 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:09.614898 3454 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:09.619316 3454 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "bc7735d426aa48a7be1d17e334cc0dce"
format_stamp: "Formatted at 2025-06-23 14:06:09 on dist-test-slave-stbh"
I20250623 14:06:09.620407 3454 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "bc7735d426aa48a7be1d17e334cc0dce"
format_stamp: "Formatted at 2025-06-23 14:06:09 on dist-test-slave-stbh"
I20250623 14:06:09.627295 3454 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.000s sys 0.008s
I20250623 14:06:09.632776 3470 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:09.633744 3454 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:06:09.634083 3454 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "bc7735d426aa48a7be1d17e334cc0dce"
format_stamp: "Formatted at 2025-06-23 14:06:09 on dist-test-slave-stbh"
I20250623 14:06:09.634404 3454 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:09.701462 3454 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:09.703615 3454 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:09.704209 3454 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:09.707288 3454 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:09.711246 3454 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:09.711445 3454 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:09.711699 3454 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:09.711854 3454 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:09.844990 3454 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:41619
I20250623 14:06:09.845103 3583 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:41619 every 8 connection(s)
I20250623 14:06:09.847714 3454 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:06:09.855551 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3454
I20250623 14:06:09.856041 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:06:09.862905 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--builtin_ntp_servers=127.2.78.20:36793
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250623 14:06:09.870921 3584 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40553
I20250623 14:06:09.871440 3584 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:09.872491 3584 heartbeater.cc:507] Master 127.2.78.62:40553 requested a full tablet report, sending...
I20250623 14:06:09.874812 3261 ts_manager.cc:194] Registered new tserver with Master: bc7735d426aa48a7be1d17e334cc0dce (127.2.78.2:41619)
I20250623 14:06:09.876056 3261 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:50725
W20250623 14:06:10.176517 3588 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250623 14:06:10.177146 3588 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:10.177397 3588 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:10.177920 3588 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:10.208452 3588 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:10.209292 3588 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:06:10.243794 3588 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36793
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40553
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:10.245138 3588 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:10.246868 3588 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:10.263971 3594 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:10.879127 3584 heartbeater.cc:499] Master 127.2.78.62:40553 was elected leader, sending a full tablet report...
W20250623 14:06:10.264187 3595 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:10.264395 3597 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:11.427950 3596 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:06:11.428051 3588 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:11.432087 3588 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:11.434298 3588 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:11.435667 3588 hybrid_clock.cc:648] HybridClock initialized: now 1750687571435626 us; error 63 us; skew 500 ppm
I20250623 14:06:11.436455 3588 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:11.442673 3588 webserver.cc:469] Webserver started at http://127.2.78.3:35015/ using document root <none> and password file <none>
I20250623 14:06:11.443569 3588 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:11.443786 3588 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:11.444214 3588 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:11.448769 3588 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476"
format_stamp: "Formatted at 2025-06-23 14:06:11 on dist-test-slave-stbh"
I20250623 14:06:11.449954 3588 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476"
format_stamp: "Formatted at 2025-06-23 14:06:11 on dist-test-slave-stbh"
I20250623 14:06:11.456890 3588 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.000s
I20250623 14:06:11.462266 3604 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:11.463263 3588 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.002s
I20250623 14:06:11.463580 3588 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476"
format_stamp: "Formatted at 2025-06-23 14:06:11 on dist-test-slave-stbh"
I20250623 14:06:11.463892 3588 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:11.519052 3588 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:11.520514 3588 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:11.520948 3588 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:11.523655 3588 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:11.527658 3588 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:11.527875 3588 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:11.528126 3588 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:11.528311 3588 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:11.664700 3588 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:46475
I20250623 14:06:11.664860 3717 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:46475 every 8 connection(s)
I20250623 14:06:11.667234 3588 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:06:11.677346 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3588
I20250623 14:06:11.677845 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:06:11.687685 3718 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40553
I20250623 14:06:11.688115 3718 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:11.689112 3718 heartbeater.cc:507] Master 127.2.78.62:40553 requested a full tablet report, sending...
I20250623 14:06:11.691176 3261 ts_manager.cc:194] Registered new tserver with Master: f967a5b687bb43c0a9c8ae1fc8d0a476 (127.2.78.3:46475)
I20250623 14:06:11.692435 3261 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:54361
I20250623 14:06:11.697525 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:11.732546 3261 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:58804:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250623 14:06:11.751638 3261 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:06:11.810119 3384 tablet_service.cc:1468] Processing CreateTablet for tablet 7c389a0e82384834847cc1e1e44a5532 (DEFAULT_TABLE table=TestTable [id=5d1124ca4cd848e398d750caa790c98b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:11.810050 3519 tablet_service.cc:1468] Processing CreateTablet for tablet 7c389a0e82384834847cc1e1e44a5532 (DEFAULT_TABLE table=TestTable [id=5d1124ca4cd848e398d750caa790c98b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:11.810712 3653 tablet_service.cc:1468] Processing CreateTablet for tablet 7c389a0e82384834847cc1e1e44a5532 (DEFAULT_TABLE table=TestTable [id=5d1124ca4cd848e398d750caa790c98b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:11.811645 3384 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c389a0e82384834847cc1e1e44a5532. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:11.811986 3519 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c389a0e82384834847cc1e1e44a5532. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:11.812589 3653 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c389a0e82384834847cc1e1e44a5532. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:11.832641 3737 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Bootstrap starting.
I20250623 14:06:11.839607 3737 tablet_bootstrap.cc:654] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:11.839704 3738 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Bootstrap starting.
I20250623 14:06:11.842160 3737 log.cc:826] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:11.842839 3739 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Bootstrap starting.
I20250623 14:06:11.850562 3739 tablet_bootstrap.cc:654] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:11.853034 3739 log.cc:826] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:11.848265 3738 tablet_bootstrap.cc:654] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:11.856397 3738 log.cc:826] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:11.864454 3739 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: No bootstrap required, opened a new log
I20250623 14:06:11.865118 3739 ts_tablet_manager.cc:1397] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Time spent bootstrapping tablet: real 0.023s user 0.012s sys 0.009s
I20250623 14:06:11.868916 3737 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: No bootstrap required, opened a new log
I20250623 14:06:11.869503 3737 ts_tablet_manager.cc:1397] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Time spent bootstrapping tablet: real 0.037s user 0.013s sys 0.023s
I20250623 14:06:11.874090 3738 tablet_bootstrap.cc:492] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: No bootstrap required, opened a new log
I20250623 14:06:11.874642 3738 ts_tablet_manager.cc:1397] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Time spent bootstrapping tablet: real 0.035s user 0.017s sys 0.006s
I20250623 14:06:11.890823 3739 raft_consensus.cc:357] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.892072 3739 raft_consensus.cc:738] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f967a5b687bb43c0a9c8ae1fc8d0a476, State: Initialized, Role: FOLLOWER
I20250623 14:06:11.892855 3739 consensus_queue.cc:260] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.901993 3718 heartbeater.cc:499] Master 127.2.78.62:40553 was elected leader, sending a full tablet report...
I20250623 14:06:11.902272 3737 raft_consensus.cc:357] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.903182 3737 raft_consensus.cc:738] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 560f4f603868445d812a943bff92feec, State: Initialized, Role: FOLLOWER
I20250623 14:06:11.904040 3739 ts_tablet_manager.cc:1428] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Time spent starting tablet: real 0.039s user 0.031s sys 0.003s
I20250623 14:06:11.903915 3737 consensus_queue.cc:260] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.906033 3738 raft_consensus.cc:357] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.907069 3738 raft_consensus.cc:738] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: bc7735d426aa48a7be1d17e334cc0dce, State: Initialized, Role: FOLLOWER
I20250623 14:06:11.907887 3738 consensus_queue.cc:260] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:11.908510 3737 ts_tablet_manager.cc:1428] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Time spent starting tablet: real 0.039s user 0.024s sys 0.006s
I20250623 14:06:11.911926 3738 ts_tablet_manager.cc:1428] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Time spent starting tablet: real 0.037s user 0.030s sys 0.002s
W20250623 14:06:11.921576 3719 tablet.cc:2378] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:11.928915 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:11.932111 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 560f4f603868445d812a943bff92feec to finish bootstrapping
I20250623 14:06:11.944878 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver bc7735d426aa48a7be1d17e334cc0dce to finish bootstrapping
I20250623 14:06:11.955224 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f967a5b687bb43c0a9c8ae1fc8d0a476 to finish bootstrapping
I20250623 14:06:11.996367 3405 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532"
dest_uuid: "560f4f603868445d812a943bff92feec"
from {username='slave'} at 127.0.0.1:48106
I20250623 14:06:11.996945 3405 raft_consensus.cc:491] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250623 14:06:11.997241 3405 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:12.001684 3405 raft_consensus.cc:513] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:12.004006 3405 leader_election.cc:290] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [CANDIDATE]: Term 1 election: Requested vote from peers bc7735d426aa48a7be1d17e334cc0dce (127.2.78.2:41619), f967a5b687bb43c0a9c8ae1fc8d0a476 (127.2.78.3:46475)
I20250623 14:06:12.013721 2360 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250623 14:06:12.017380 3673 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532" candidate_uuid: "560f4f603868445d812a943bff92feec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476"
I20250623 14:06:12.017405 3539 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532" candidate_uuid: "560f4f603868445d812a943bff92feec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "bc7735d426aa48a7be1d17e334cc0dce"
I20250623 14:06:12.017971 3673 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:12.018081 3539 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:12.022485 3673 raft_consensus.cc:2466] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 560f4f603868445d812a943bff92feec in term 1.
I20250623 14:06:12.022512 3539 raft_consensus.cc:2466] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 560f4f603868445d812a943bff92feec in term 1.
I20250623 14:06:12.023694 3340 leader_election.cc:304] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 3 yes votes; 0 no votes. yes voters: 560f4f603868445d812a943bff92feec, bc7735d426aa48a7be1d17e334cc0dce, f967a5b687bb43c0a9c8ae1fc8d0a476; no voters:
I20250623 14:06:12.024593 3744 raft_consensus.cc:2802] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:12.026417 3744 raft_consensus.cc:695] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 LEADER]: Becoming Leader. State: Replica: 560f4f603868445d812a943bff92feec, State: Running, Role: LEADER
I20250623 14:06:12.027161 3744 consensus_queue.cc:237] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:12.041944 3258 catalog_manager.cc:5582] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec reported cstate change: term changed from 0 to 1, leader changed from <none> to 560f4f603868445d812a943bff92feec (127.2.78.1). New cstate: current_term: 1 leader_uuid: "560f4f603868445d812a943bff92feec" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } health_report { overall_health: UNKNOWN } } }
W20250623 14:06:12.045405 3451 tablet.cc:2378] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:06:12.105600 3585 tablet.cc:2378] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:12.119097 2360 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250623 14:06:12.324170 2360 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250623 14:06:12.409657 3744 consensus_queue.cc:1035] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [LEADER]: Connected to new peer: Peer: permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250623 14:06:12.423977 3755 consensus_queue.cc:1035] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [LEADER]: Connected to new peer: Peer: permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:06:14.392308 3405 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532"
dest_uuid: "560f4f603868445d812a943bff92feec"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:48112
I20250623 14:06:14.392812 3405 raft_consensus.cc:604] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 LEADER]: Received request to transfer leadership
I20250623 14:06:14.493218 3780 raft_consensus.cc:991] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec: : Instructing follower f967a5b687bb43c0a9c8ae1fc8d0a476 to start an election
I20250623 14:06:14.493631 3755 raft_consensus.cc:1079] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 LEADER]: Signalling peer f967a5b687bb43c0a9c8ae1fc8d0a476 to start an election
I20250623 14:06:14.495010 3673 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532"
dest_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476"
from {username='slave'} at 127.2.78.1:46885
I20250623 14:06:14.495488 3673 raft_consensus.cc:491] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250623 14:06:14.495777 3673 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:06:14.500010 3673 raft_consensus.cc:513] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:14.502066 3673 leader_election.cc:290] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [CANDIDATE]: Term 2 election: Requested vote from peers bc7735d426aa48a7be1d17e334cc0dce (127.2.78.2:41619), 560f4f603868445d812a943bff92feec (127.2.78.1:40231)
I20250623 14:06:14.513087 3539 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532" candidate_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "bc7735d426aa48a7be1d17e334cc0dce"
I20250623 14:06:14.513116 3405 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532" candidate_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "560f4f603868445d812a943bff92feec"
I20250623 14:06:14.513577 3539 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:06:14.513622 3405 raft_consensus.cc:3053] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 LEADER]: Stepping down as leader of term 1
I20250623 14:06:14.513892 3405 raft_consensus.cc:738] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 560f4f603868445d812a943bff92feec, State: Running, Role: LEADER
I20250623 14:06:14.514379 3405 consensus_queue.cc:260] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:14.515224 3405 raft_consensus.cc:3058] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:06:14.517586 3539 raft_consensus.cc:2466] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f967a5b687bb43c0a9c8ae1fc8d0a476 in term 2.
I20250623 14:06:14.518600 3605 leader_election.cc:304] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: bc7735d426aa48a7be1d17e334cc0dce, f967a5b687bb43c0a9c8ae1fc8d0a476; no voters:
I20250623 14:06:14.519510 3405 raft_consensus.cc:2466] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f967a5b687bb43c0a9c8ae1fc8d0a476 in term 2.
I20250623 14:06:14.520823 3784 raft_consensus.cc:2802] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:06:14.522126 3784 raft_consensus.cc:695] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [term 2 LEADER]: Becoming Leader. State: Replica: f967a5b687bb43c0a9c8ae1fc8d0a476, State: Running, Role: LEADER
I20250623 14:06:14.522846 3784 consensus_queue.cc:237] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } }
I20250623 14:06:14.531039 3261 catalog_manager.cc:5582] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 reported cstate change: term changed from 1 to 2, leader changed from 560f4f603868445d812a943bff92feec (127.2.78.1) to f967a5b687bb43c0a9c8ae1fc8d0a476 (127.2.78.3). New cstate: current_term: 2 leader_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f967a5b687bb43c0a9c8ae1fc8d0a476" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 46475 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:14.974285 3539 raft_consensus.cc:1273] T 7c389a0e82384834847cc1e1e44a5532 P bc7735d426aa48a7be1d17e334cc0dce [term 2 FOLLOWER]: Refusing update from remote peer f967a5b687bb43c0a9c8ae1fc8d0a476: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250623 14:06:14.975471 3784 consensus_queue.cc:1035] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [LEADER]: Connected to new peer: Peer: permanent_uuid: "bc7735d426aa48a7be1d17e334cc0dce" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 41619 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250623 14:06:14.977998 3405 raft_consensus.cc:1273] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 2 FOLLOWER]: Refusing update from remote peer f967a5b687bb43c0a9c8ae1fc8d0a476: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250623 14:06:14.979224 3784 consensus_queue.cc:1035] T 7c389a0e82384834847cc1e1e44a5532 P f967a5b687bb43c0a9c8ae1fc8d0a476 [LEADER]: Connected to new peer: Peer: permanent_uuid: "560f4f603868445d812a943bff92feec" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 40231 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250623 14:06:15.289134 3292 debug-util.cc:398] Leaking SignalData structure 0x7b080009a2c0 after lost signal to thread 3228
I20250623 14:06:17.557240 3405 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "7c389a0e82384834847cc1e1e44a5532"
dest_uuid: "560f4f603868445d812a943bff92feec"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:48120
I20250623 14:06:17.557868 3405 raft_consensus.cc:604] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 2 FOLLOWER]: Received request to transfer leadership
I20250623 14:06:17.558209 3405 raft_consensus.cc:612] T 7c389a0e82384834847cc1e1e44a5532 P 560f4f603868445d812a943bff92feec [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250623 14:06:18.596710 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3320
I20250623 14:06:18.622581 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3454
I20250623 14:06:18.646941 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3588
I20250623 14:06:18.674858 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3227
2025-06-23T14:06:18Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (14464 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250623 14:06:18.733559 2360 test_util.cc:276] Using random seed: -1220658216
I20250623 14:06:18.737726 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:06:18.737927 2360 ts_itest-base.cc:116] --------------
I20250623 14:06:18.738054 2360 ts_itest-base.cc:117] 3 tablet servers
I20250623 14:06:18.738152 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:06:18.738255 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:06:18Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:06:18Z Disabled control of system clock
I20250623 14:06:18.774786 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:35193
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:45691
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:35193 with env {}
W20250623 14:06:19.070492 3828 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:19.071046 3828 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:19.071440 3828 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:19.101644 3828 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:06:19.101963 3828 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:19.102165 3828 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:06:19.102355 3828 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:06:19.136933 3828 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:45691
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:35193
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:35193
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:19.138203 3828 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:19.139765 3828 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:19.155896 3834 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:19.155910 3835 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:19.157555 3828 server_base.cc:1048] running on GCE node
W20250623 14:06:19.155954 3837 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:20.331837 3828 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:20.334452 3828 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:20.335902 3828 hybrid_clock.cc:648] HybridClock initialized: now 1750687580335866 us; error 60 us; skew 500 ppm
I20250623 14:06:20.336727 3828 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:20.347723 3828 webserver.cc:469] Webserver started at http://127.2.78.62:33221/ using document root <none> and password file <none>
I20250623 14:06:20.348719 3828 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:20.348976 3828 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:20.349476 3828 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:20.355594 3828 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "ce36319b721544e3a749330db91f6d2a"
format_stamp: "Formatted at 2025-06-23 14:06:20 on dist-test-slave-stbh"
I20250623 14:06:20.356719 3828 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "ce36319b721544e3a749330db91f6d2a"
format_stamp: "Formatted at 2025-06-23 14:06:20 on dist-test-slave-stbh"
I20250623 14:06:20.363942 3828 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.007s
I20250623 14:06:20.369519 3844 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:20.370668 3828 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250623 14:06:20.370999 3828 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "ce36319b721544e3a749330db91f6d2a"
format_stamp: "Formatted at 2025-06-23 14:06:20 on dist-test-slave-stbh"
I20250623 14:06:20.371357 3828 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:20.424753 3828 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:20.426199 3828 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:20.426630 3828 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:20.495666 3828 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:35193
I20250623 14:06:20.495749 3895 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:35193 every 8 connection(s)
I20250623 14:06:20.498487 3828 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:06:20.503417 3896 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:20.507268 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3828
I20250623 14:06:20.507679 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:06:20.525651 3896 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a: Bootstrap starting.
I20250623 14:06:20.531656 3896 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:20.533280 3896 log.cc:826] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:20.537905 3896 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a: No bootstrap required, opened a new log
I20250623 14:06:20.555572 3896 raft_consensus.cc:357] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } }
I20250623 14:06:20.556253 3896 raft_consensus.cc:383] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:20.556471 3896 raft_consensus.cc:738] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ce36319b721544e3a749330db91f6d2a, State: Initialized, Role: FOLLOWER
I20250623 14:06:20.557190 3896 consensus_queue.cc:260] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } }
I20250623 14:06:20.557817 3896 raft_consensus.cc:397] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:20.558127 3896 raft_consensus.cc:491] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:20.558446 3896 raft_consensus.cc:3058] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:20.562467 3896 raft_consensus.cc:513] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } }
I20250623 14:06:20.563143 3896 leader_election.cc:304] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ce36319b721544e3a749330db91f6d2a; no voters:
I20250623 14:06:20.564895 3896 leader_election.cc:290] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:20.565673 3901 raft_consensus.cc:2802] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:20.567790 3901 raft_consensus.cc:695] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [term 1 LEADER]: Becoming Leader. State: Replica: ce36319b721544e3a749330db91f6d2a, State: Running, Role: LEADER
I20250623 14:06:20.568534 3901 consensus_queue.cc:237] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } }
I20250623 14:06:20.569094 3896 sys_catalog.cc:564] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:06:20.580237 3902 sys_catalog.cc:455] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "ce36319b721544e3a749330db91f6d2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } } }
I20250623 14:06:20.580636 3903 sys_catalog.cc:455] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [sys.catalog]: SysCatalogTable state changed. Reason: New leader ce36319b721544e3a749330db91f6d2a. Latest consensus state: current_term: 1 leader_uuid: "ce36319b721544e3a749330db91f6d2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ce36319b721544e3a749330db91f6d2a" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 35193 } } }
I20250623 14:06:20.581166 3902 sys_catalog.cc:458] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:20.581589 3903 sys_catalog.cc:458] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:20.588573 3911 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:06:20.599473 3911 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:06:20.616420 3911 catalog_manager.cc:1349] Generated new cluster ID: 3b23d8c1f54b48e5a98611e916f914ca
I20250623 14:06:20.616729 3911 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:06:20.667335 3911 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:06:20.668809 3911 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:06:20.684420 3911 catalog_manager.cc:5955] T 00000000000000000000000000000000 P ce36319b721544e3a749330db91f6d2a: Generated new TSK 0
I20250623 14:06:20.685344 3911 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:06:20.703270 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--builtin_ntp_servers=127.2.78.20:45691
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250623 14:06:21.009702 3920 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:21.010229 3920 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:21.010715 3920 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:21.043272 3920 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:21.044099 3920 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:06:21.078656 3920 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:45691
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:21.079941 3920 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:21.081526 3920 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:21.098559 3927 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:21.101575 3929 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:21.098810 3926 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:22.276805 3928 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:06:22.277418 3920 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:22.281003 3920 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:22.283726 3920 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:22.285173 3920 hybrid_clock.cc:648] HybridClock initialized: now 1750687582285115 us; error 77 us; skew 500 ppm
I20250623 14:06:22.286021 3920 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:22.293133 3920 webserver.cc:469] Webserver started at http://127.2.78.1:44177/ using document root <none> and password file <none>
I20250623 14:06:22.294080 3920 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:22.294303 3920 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:22.294763 3920 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:22.299290 3920 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "fcff1c8ff2cf425bb93df1749a2488cc"
format_stamp: "Formatted at 2025-06-23 14:06:22 on dist-test-slave-stbh"
I20250623 14:06:22.300372 3920 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "fcff1c8ff2cf425bb93df1749a2488cc"
format_stamp: "Formatted at 2025-06-23 14:06:22 on dist-test-slave-stbh"
I20250623 14:06:22.307277 3920 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.004s sys 0.004s
I20250623 14:06:22.312670 3936 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:22.313601 3920 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:06:22.313913 3920 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "fcff1c8ff2cf425bb93df1749a2488cc"
format_stamp: "Formatted at 2025-06-23 14:06:22 on dist-test-slave-stbh"
I20250623 14:06:22.314229 3920 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:22.364408 3920 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:22.365919 3920 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:22.366355 3920 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:22.368773 3920 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:22.372807 3920 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:22.373015 3920 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:22.373278 3920 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:22.373445 3920 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:22.511078 3920 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:46187
I20250623 14:06:22.511149 4049 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:46187 every 8 connection(s)
I20250623 14:06:22.514142 3920 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:06:22.522935 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 3920
I20250623 14:06:22.523309 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:06:22.528566 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--builtin_ntp_servers=127.2.78.20:45691
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:22.533439 4050 heartbeater.cc:344] Connected to a master server at 127.2.78.62:35193
I20250623 14:06:22.533870 4050 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:22.534816 4050 heartbeater.cc:507] Master 127.2.78.62:35193 requested a full tablet report, sending...
I20250623 14:06:22.538178 3861 ts_manager.cc:194] Registered new tserver with Master: fcff1c8ff2cf425bb93df1749a2488cc (127.2.78.1:46187)
I20250623 14:06:22.540992 3861 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:50577
W20250623 14:06:22.821189 4054 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:22.821648 4054 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:22.822144 4054 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:22.854597 4054 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:22.855402 4054 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:06:22.889640 4054 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:45691
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:22.890902 4054 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:22.892485 4054 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:22.908273 4060 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:23.544291 4050 heartbeater.cc:499] Master 127.2.78.62:35193 was elected leader, sending a full tablet report...
W20250623 14:06:22.908371 4061 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:22.910084 4054 server_base.cc:1048] running on GCE node
W20250623 14:06:22.910956 4063 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:24.061340 4054 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:24.064103 4054 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:24.065541 4054 hybrid_clock.cc:648] HybridClock initialized: now 1750687584065474 us; error 86 us; skew 500 ppm
I20250623 14:06:24.066344 4054 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:24.073143 4054 webserver.cc:469] Webserver started at http://127.2.78.2:33909/ using document root <none> and password file <none>
I20250623 14:06:24.074092 4054 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:24.074285 4054 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:24.074707 4054 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:24.079257 4054 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "cb71545b731746bab3409baff8c9d094"
format_stamp: "Formatted at 2025-06-23 14:06:24 on dist-test-slave-stbh"
I20250623 14:06:24.080348 4054 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "cb71545b731746bab3409baff8c9d094"
format_stamp: "Formatted at 2025-06-23 14:06:24 on dist-test-slave-stbh"
I20250623 14:06:24.087599 4054 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.004s
I20250623 14:06:24.093261 4070 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:24.094285 4054 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.002s
I20250623 14:06:24.094607 4054 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "cb71545b731746bab3409baff8c9d094"
format_stamp: "Formatted at 2025-06-23 14:06:24 on dist-test-slave-stbh"
I20250623 14:06:24.094947 4054 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:24.144953 4054 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:24.146379 4054 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:24.146800 4054 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:24.149191 4054 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:24.153064 4054 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:24.153265 4054 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:24.153499 4054 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:24.153656 4054 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:24.298147 4054 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:42627
I20250623 14:06:24.298321 4183 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:42627 every 8 connection(s)
I20250623 14:06:24.300724 4054 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:06:24.302103 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4054
I20250623 14:06:24.302699 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:06:24.310225 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--builtin_ntp_servers=127.2.78.20:45691
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:24.321880 4184 heartbeater.cc:344] Connected to a master server at 127.2.78.62:35193
I20250623 14:06:24.322291 4184 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:24.323269 4184 heartbeater.cc:507] Master 127.2.78.62:35193 requested a full tablet report, sending...
I20250623 14:06:24.325417 3861 ts_manager.cc:194] Registered new tserver with Master: cb71545b731746bab3409baff8c9d094 (127.2.78.2:42627)
I20250623 14:06:24.327359 3861 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:40663
W20250623 14:06:24.606073 4188 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:24.606534 4188 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:24.607096 4188 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:24.637974 4188 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:24.638828 4188 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:06:24.672994 4188 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:45691
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:35193
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:24.674325 4188 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:24.675863 4188 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:24.690910 4194 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:25.330516 4184 heartbeater.cc:499] Master 127.2.78.62:35193 was elected leader, sending a full tablet report...
W20250623 14:06:24.691201 4195 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:24.693267 4188 server_base.cc:1048] running on GCE node
W20250623 14:06:24.691298 4197 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:25.846899 4188 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:25.849210 4188 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:25.850569 4188 hybrid_clock.cc:648] HybridClock initialized: now 1750687585850528 us; error 51 us; skew 500 ppm
I20250623 14:06:25.851377 4188 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:25.864830 4188 webserver.cc:469] Webserver started at http://127.2.78.3:41865/ using document root <none> and password file <none>
I20250623 14:06:25.865811 4188 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:25.866034 4188 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:25.866494 4188 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:25.871136 4188 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "572f664e3b3f4c58975354ed86237c41"
format_stamp: "Formatted at 2025-06-23 14:06:25 on dist-test-slave-stbh"
I20250623 14:06:25.872478 4188 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "572f664e3b3f4c58975354ed86237c41"
format_stamp: "Formatted at 2025-06-23 14:06:25 on dist-test-slave-stbh"
I20250623 14:06:25.880112 4188 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.007s
I20250623 14:06:25.885708 4205 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:25.886832 4188 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.002s
I20250623 14:06:25.887173 4188 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "572f664e3b3f4c58975354ed86237c41"
format_stamp: "Formatted at 2025-06-23 14:06:25 on dist-test-slave-stbh"
I20250623 14:06:25.887537 4188 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:25.953353 4188 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:25.954833 4188 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:25.955303 4188 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:25.957824 4188 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:25.961968 4188 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:25.962175 4188 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:25.962432 4188 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:25.962625 4188 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:26.093915 4188 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:33817
I20250623 14:06:26.094004 4318 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:33817 every 8 connection(s)
I20250623 14:06:26.096429 4188 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:06:26.105631 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4188
I20250623 14:06:26.106405 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:06:26.118548 4319 heartbeater.cc:344] Connected to a master server at 127.2.78.62:35193
I20250623 14:06:26.118932 4319 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:26.119850 4319 heartbeater.cc:507] Master 127.2.78.62:35193 requested a full tablet report, sending...
I20250623 14:06:26.121997 3861 ts_manager.cc:194] Registered new tserver with Master: 572f664e3b3f4c58975354ed86237c41 (127.2.78.3:33817)
I20250623 14:06:26.123333 3861 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:39967
I20250623 14:06:26.127059 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:26.161069 3861 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43384:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250623 14:06:26.180687 3861 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:06:26.232225 4119 tablet_service.cc:1468] Processing CreateTablet for tablet bc352633c0514e5f89137a85e5dddb6a (DEFAULT_TABLE table=TestTable [id=3fa1dd4b1811400ea8d8188aece7e2bf]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:26.234196 4119 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bc352633c0514e5f89137a85e5dddb6a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.239158 3984 tablet_service.cc:1468] Processing CreateTablet for tablet bc352633c0514e5f89137a85e5dddb6a (DEFAULT_TABLE table=TestTable [id=3fa1dd4b1811400ea8d8188aece7e2bf]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:26.242069 3984 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bc352633c0514e5f89137a85e5dddb6a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.243216 4254 tablet_service.cc:1468] Processing CreateTablet for tablet bc352633c0514e5f89137a85e5dddb6a (DEFAULT_TABLE table=TestTable [id=3fa1dd4b1811400ea8d8188aece7e2bf]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:26.245136 4254 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bc352633c0514e5f89137a85e5dddb6a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.257769 4338 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: Bootstrap starting.
I20250623 14:06:26.263257 4338 tablet_bootstrap.cc:654] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.266114 4338 log.cc:826] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:26.270213 4340 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Bootstrap starting.
I20250623 14:06:26.271977 4338 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: No bootstrap required, opened a new log
I20250623 14:06:26.272473 4338 ts_tablet_manager.cc:1397] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: Time spent bootstrapping tablet: real 0.015s user 0.000s sys 0.014s
I20250623 14:06:26.273442 4341 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Bootstrap starting.
I20250623 14:06:26.278430 4340 tablet_bootstrap.cc:654] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.279573 4341 tablet_bootstrap.cc:654] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.281056 4340 log.cc:826] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:26.281244 4341 log.cc:826] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:26.286633 4340 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: No bootstrap required, opened a new log
I20250623 14:06:26.286911 4341 tablet_bootstrap.cc:492] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: No bootstrap required, opened a new log
I20250623 14:06:26.287051 4340 ts_tablet_manager.cc:1397] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Time spent bootstrapping tablet: real 0.017s user 0.015s sys 0.000s
I20250623 14:06:26.287410 4341 ts_tablet_manager.cc:1397] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Time spent bootstrapping tablet: real 0.014s user 0.012s sys 0.000s
I20250623 14:06:26.308068 4340 raft_consensus.cc:357] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.309180 4340 raft_consensus.cc:383] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.309573 4340 raft_consensus.cc:738] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fcff1c8ff2cf425bb93df1749a2488cc, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.310695 4340 consensus_queue.cc:260] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.312781 4338 raft_consensus.cc:357] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.314035 4338 raft_consensus.cc:383] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.314435 4338 raft_consensus.cc:738] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cb71545b731746bab3409baff8c9d094, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.315583 4338 consensus_queue.cc:260] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.316700 4341 raft_consensus.cc:357] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.318737 4341 raft_consensus.cc:383] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.319104 4341 raft_consensus.cc:738] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 572f664e3b3f4c58975354ed86237c41, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.320219 4341 consensus_queue.cc:260] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.326064 4319 heartbeater.cc:499] Master 127.2.78.62:35193 was elected leader, sending a full tablet report...
I20250623 14:06:26.327970 4341 ts_tablet_manager.cc:1428] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Time spent starting tablet: real 0.040s user 0.035s sys 0.000s
I20250623 14:06:26.330875 4340 ts_tablet_manager.cc:1428] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Time spent starting tablet: real 0.044s user 0.027s sys 0.015s
I20250623 14:06:26.333674 4338 ts_tablet_manager.cc:1428] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094: Time spent starting tablet: real 0.061s user 0.034s sys 0.011s
I20250623 14:06:26.340572 4347 raft_consensus.cc:491] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:06:26.341112 4347 raft_consensus.cc:513] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.345552 4347 leader_election.cc:290] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fcff1c8ff2cf425bb93df1749a2488cc (127.2.78.1:46187), cb71545b731746bab3409baff8c9d094 (127.2.78.2:42627)
W20250623 14:06:26.350999 4320 tablet.cc:2378] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:26.354751 4005 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "bc352633c0514e5f89137a85e5dddb6a" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" is_pre_election: true
I20250623 14:06:26.355613 4005 raft_consensus.cc:2466] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 0.
I20250623 14:06:26.357087 4207 leader_election.cc:304] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 572f664e3b3f4c58975354ed86237c41, fcff1c8ff2cf425bb93df1749a2488cc; no voters:
I20250623 14:06:26.357946 4347 raft_consensus.cc:2802] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:06:26.358299 4347 raft_consensus.cc:491] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:06:26.358623 4347 raft_consensus.cc:3058] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.359997 4139 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "bc352633c0514e5f89137a85e5dddb6a" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cb71545b731746bab3409baff8c9d094" is_pre_election: true
I20250623 14:06:26.360872 4139 raft_consensus.cc:2466] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 0.
I20250623 14:06:26.364029 4347 raft_consensus.cc:513] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.365329 4347 leader_election.cc:290] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 election: Requested vote from peers fcff1c8ff2cf425bb93df1749a2488cc (127.2.78.1:46187), cb71545b731746bab3409baff8c9d094 (127.2.78.2:42627)
I20250623 14:06:26.366231 4005 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "bc352633c0514e5f89137a85e5dddb6a" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fcff1c8ff2cf425bb93df1749a2488cc"
I20250623 14:06:26.366253 4139 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "bc352633c0514e5f89137a85e5dddb6a" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cb71545b731746bab3409baff8c9d094"
I20250623 14:06:26.366662 4139 raft_consensus.cc:3058] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.366645 4005 raft_consensus.cc:3058] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.370981 4139 raft_consensus.cc:2466] T bc352633c0514e5f89137a85e5dddb6a P cb71545b731746bab3409baff8c9d094 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 1.
I20250623 14:06:26.370973 4005 raft_consensus.cc:2466] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 1.
I20250623 14:06:26.371829 4207 leader_election.cc:304] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 572f664e3b3f4c58975354ed86237c41, cb71545b731746bab3409baff8c9d094; no voters:
I20250623 14:06:26.372427 4347 raft_consensus.cc:2802] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:26.374009 4347 raft_consensus.cc:695] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [term 1 LEADER]: Becoming Leader. State: Replica: 572f664e3b3f4c58975354ed86237c41, State: Running, Role: LEADER
I20250623 14:06:26.374713 4347 consensus_queue.cc:237] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.385823 3859 catalog_manager.cc:5582] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 reported cstate change: term changed from 0 to 1, leader changed from <none> to 572f664e3b3f4c58975354ed86237c41 (127.2.78.3). New cstate: current_term: 1 leader_uuid: "572f664e3b3f4c58975354ed86237c41" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:26.437026 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:26.440192 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver fcff1c8ff2cf425bb93df1749a2488cc to finish bootstrapping
I20250623 14:06:26.452354 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver cb71545b731746bab3409baff8c9d094 to finish bootstrapping
I20250623 14:06:26.462034 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 572f664e3b3f4c58975354ed86237c41 to finish bootstrapping
I20250623 14:06:26.474249 3859 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43384:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
W20250623 14:06:26.475750 3859 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:06:26.491334 3984 tablet_service.cc:1468] Processing CreateTablet for tablet 112ea0ad204d48dab77435529ccf3bd8 (DEFAULT_TABLE table=TestAnotherTable [id=99200accf64f486b99e49c503f2fdab9]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250623 14:06:26.491762 4119 tablet_service.cc:1468] Processing CreateTablet for tablet 112ea0ad204d48dab77435529ccf3bd8 (DEFAULT_TABLE table=TestAnotherTable [id=99200accf64f486b99e49c503f2fdab9]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250623 14:06:26.492412 3984 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 112ea0ad204d48dab77435529ccf3bd8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.492806 4119 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 112ea0ad204d48dab77435529ccf3bd8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.492760 4254 tablet_service.cc:1468] Processing CreateTablet for tablet 112ea0ad204d48dab77435529ccf3bd8 (DEFAULT_TABLE table=TestAnotherTable [id=99200accf64f486b99e49c503f2fdab9]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250623 14:06:26.493808 4254 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 112ea0ad204d48dab77435529ccf3bd8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:26.506732 4340 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc: Bootstrap starting.
I20250623 14:06:26.507324 4338 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: Bootstrap starting.
I20250623 14:06:26.512898 4340 tablet_bootstrap.cc:654] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.513274 4338 tablet_bootstrap.cc:654] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.515321 4341 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41: Bootstrap starting.
W20250623 14:06:26.522673 4051 tablet.cc:2378] T bc352633c0514e5f89137a85e5dddb6a P fcff1c8ff2cf425bb93df1749a2488cc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:26.522184 4341 tablet_bootstrap.cc:654] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:26.529696 4338 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: No bootstrap required, opened a new log
I20250623 14:06:26.530246 4338 ts_tablet_manager.cc:1397] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: Time spent bootstrapping tablet: real 0.023s user 0.011s sys 0.003s
I20250623 14:06:26.532847 4338 raft_consensus.cc:357] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.533524 4338 raft_consensus.cc:383] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.533820 4338 raft_consensus.cc:738] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cb71545b731746bab3409baff8c9d094, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.534973 4341 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41: No bootstrap required, opened a new log
I20250623 14:06:26.534610 4338 consensus_queue.cc:260] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.535441 4341 ts_tablet_manager.cc:1397] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41: Time spent bootstrapping tablet: real 0.020s user 0.006s sys 0.011s
I20250623 14:06:26.537297 4340 tablet_bootstrap.cc:492] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc: No bootstrap required, opened a new log
I20250623 14:06:26.537724 4340 ts_tablet_manager.cc:1397] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc: Time spent bootstrapping tablet: real 0.031s user 0.012s sys 0.004s
I20250623 14:06:26.537889 4341 raft_consensus.cc:357] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.538530 4341 raft_consensus.cc:383] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.538796 4341 raft_consensus.cc:738] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 572f664e3b3f4c58975354ed86237c41, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.539534 4341 consensus_queue.cc:260] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.540206 4340 raft_consensus.cc:357] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.540849 4340 raft_consensus.cc:383] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:26.541108 4340 raft_consensus.cc:738] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fcff1c8ff2cf425bb93df1749a2488cc, State: Initialized, Role: FOLLOWER
I20250623 14:06:26.541700 4340 consensus_queue.cc:260] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.543133 4338 ts_tablet_manager.cc:1428] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: Time spent starting tablet: real 0.013s user 0.002s sys 0.004s
I20250623 14:06:26.545483 4340 ts_tablet_manager.cc:1428] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc: Time spent starting tablet: real 0.007s user 0.003s sys 0.002s
I20250623 14:06:26.550637 4341 ts_tablet_manager.cc:1428] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41: Time spent starting tablet: real 0.015s user 0.000s sys 0.006s
W20250623 14:06:26.557675 4185 tablet.cc:2378] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:26.588717 4347 raft_consensus.cc:491] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:06:26.589080 4347 raft_consensus.cc:513] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.590354 4347 leader_election.cc:290] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fcff1c8ff2cf425bb93df1749a2488cc (127.2.78.1:46187), cb71545b731746bab3409baff8c9d094 (127.2.78.2:42627)
I20250623 14:06:26.591188 4005 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "112ea0ad204d48dab77435529ccf3bd8" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" is_pre_election: true
I20250623 14:06:26.591351 4139 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "112ea0ad204d48dab77435529ccf3bd8" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cb71545b731746bab3409baff8c9d094" is_pre_election: true
I20250623 14:06:26.591655 4005 raft_consensus.cc:2466] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 0.
I20250623 14:06:26.591851 4139 raft_consensus.cc:2466] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 0.
I20250623 14:06:26.592437 4207 leader_election.cc:304] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 572f664e3b3f4c58975354ed86237c41, fcff1c8ff2cf425bb93df1749a2488cc; no voters:
I20250623 14:06:26.593106 4347 raft_consensus.cc:2802] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:06:26.593477 4347 raft_consensus.cc:491] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:06:26.593739 4347 raft_consensus.cc:3058] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.597563 4347 raft_consensus.cc:513] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.598886 4347 leader_election.cc:290] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 election: Requested vote from peers fcff1c8ff2cf425bb93df1749a2488cc (127.2.78.1:46187), cb71545b731746bab3409baff8c9d094 (127.2.78.2:42627)
I20250623 14:06:26.599577 4005 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "112ea0ad204d48dab77435529ccf3bd8" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fcff1c8ff2cf425bb93df1749a2488cc"
I20250623 14:06:26.599767 4139 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "112ea0ad204d48dab77435529ccf3bd8" candidate_uuid: "572f664e3b3f4c58975354ed86237c41" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cb71545b731746bab3409baff8c9d094"
I20250623 14:06:26.599972 4005 raft_consensus.cc:3058] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.600178 4139 raft_consensus.cc:3058] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:26.603864 4005 raft_consensus.cc:2466] T 112ea0ad204d48dab77435529ccf3bd8 P fcff1c8ff2cf425bb93df1749a2488cc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 1.
I20250623 14:06:26.604429 4139 raft_consensus.cc:2466] T 112ea0ad204d48dab77435529ccf3bd8 P cb71545b731746bab3409baff8c9d094 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 572f664e3b3f4c58975354ed86237c41 in term 1.
I20250623 14:06:26.604638 4207 leader_election.cc:304] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 572f664e3b3f4c58975354ed86237c41, fcff1c8ff2cf425bb93df1749a2488cc; no voters:
I20250623 14:06:26.605227 4347 raft_consensus.cc:2802] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:26.605538 4347 raft_consensus.cc:695] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [term 1 LEADER]: Becoming Leader. State: Replica: 572f664e3b3f4c58975354ed86237c41, State: Running, Role: LEADER
I20250623 14:06:26.606113 4347 consensus_queue.cc:237] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } }
I20250623 14:06:26.612448 3861 catalog_manager.cc:5582] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 reported cstate change: term changed from 0 to 1, leader changed from <none> to 572f664e3b3f4c58975354ed86237c41 (127.2.78.3). New cstate: current_term: 1 leader_uuid: "572f664e3b3f4c58975354ed86237c41" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "572f664e3b3f4c58975354ed86237c41" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33817 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:26.835134 4347 consensus_queue.cc:1035] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Connected to new peer: Peer: permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:06:26.852370 4349 consensus_queue.cc:1035] T bc352633c0514e5f89137a85e5dddb6a P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250623 14:06:26.945983 4356 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:26.946564 4356 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:26.977833 4356 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250623 14:06:27.105691 4347 consensus_queue.cc:1035] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Connected to new peer: Peer: permanent_uuid: "fcff1c8ff2cf425bb93df1749a2488cc" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46187 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:06:27.121522 4347 consensus_queue.cc:1035] T 112ea0ad204d48dab77435529ccf3bd8 P 572f664e3b3f4c58975354ed86237c41 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cb71545b731746bab3409baff8c9d094" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 42627 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250623 14:06:28.276396 4356 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.258s user 0.436s sys 0.817s
W20250623 14:06:28.276708 4356 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.258s user 0.436s sys 0.817s
W20250623 14:06:29.665437 4378 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:29.666086 4378 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:29.696616 4378 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250623 14:06:30.976963 4378 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.238s user 0.475s sys 0.758s
W20250623 14:06:30.977375 4378 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.238s user 0.475s sys 0.758s
W20250623 14:06:32.373548 4392 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:32.374202 4392 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:32.405252 4392 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250623 14:06:33.687408 4392 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.240s user 0.512s sys 0.727s
W20250623 14:06:33.687713 4392 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.241s user 0.512s sys 0.727s
W20250623 14:06:35.062517 4408 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:35.063131 4408 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:35.097023 4408 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250623 14:06:36.411355 4408 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.264s user 0.450s sys 0.809s
W20250623 14:06:36.411783 4408 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.265s user 0.450s sys 0.809s
I20250623 14:06:37.491374 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3920
I20250623 14:06:37.519093 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4054
I20250623 14:06:37.546023 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4188
I20250623 14:06:37.572870 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 3828
2025-06-23T14:06:37Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (18897 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250623 14:06:37.631137 2360 test_util.cc:276] Using random seed: -1201760632
I20250623 14:06:37.635804 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:06:37.635987 2360 ts_itest-base.cc:116] --------------
I20250623 14:06:37.636150 2360 ts_itest-base.cc:117] 3 tablet servers
I20250623 14:06:37.636291 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:06:37.636438 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:06:37Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:06:37Z Disabled control of system clock
I20250623 14:06:37.674782 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:37455
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:39175
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:37455
--superuser_acl=no-such-user with env {}
W20250623 14:06:37.973059 4430 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:37.973670 4430 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:37.974220 4430 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:38.005893 4430 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:06:38.006243 4430 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:38.006528 4430 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:06:38.006788 4430 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:06:38.043115 4430 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:39175
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:37455
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:37455
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:38.044425 4430 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:38.046208 4430 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:38.060686 4436 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:38.061655 4439 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:38.060731 4437 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:39.261397 4438 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1199 milliseconds
I20250623 14:06:39.261499 4430 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:39.262799 4430 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:39.265909 4430 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:39.267374 4430 hybrid_clock.cc:648] HybridClock initialized: now 1750687599267322 us; error 69 us; skew 500 ppm
I20250623 14:06:39.268134 4430 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:39.274382 4430 webserver.cc:469] Webserver started at http://127.2.78.62:35305/ using document root <none> and password file <none>
I20250623 14:06:39.275322 4430 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:39.275511 4430 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:39.275933 4430 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:39.280225 4430 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "7350ca642b1246f6a2c0684d2d76edff"
format_stamp: "Formatted at 2025-06-23 14:06:39 on dist-test-slave-stbh"
I20250623 14:06:39.281255 4430 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "7350ca642b1246f6a2c0684d2d76edff"
format_stamp: "Formatted at 2025-06-23 14:06:39 on dist-test-slave-stbh"
I20250623 14:06:39.288389 4430 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.000s
I20250623 14:06:39.293606 4446 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:39.294656 4430 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250623 14:06:39.294988 4430 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7350ca642b1246f6a2c0684d2d76edff"
format_stamp: "Formatted at 2025-06-23 14:06:39 on dist-test-slave-stbh"
I20250623 14:06:39.295334 4430 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:39.361153 4430 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:39.362751 4430 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:39.363199 4430 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:39.432122 4430 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:37455
I20250623 14:06:39.432255 4497 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:37455 every 8 connection(s)
I20250623 14:06:39.434855 4430 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:06:39.440434 4498 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:39.441109 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4430
I20250623 14:06:39.441488 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:06:39.464491 4498 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff: Bootstrap starting.
I20250623 14:06:39.469736 4498 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:39.471681 4498 log.cc:826] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:39.475970 4498 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff: No bootstrap required, opened a new log
I20250623 14:06:39.493575 4498 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } }
I20250623 14:06:39.494387 4498 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:39.494599 4498 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7350ca642b1246f6a2c0684d2d76edff, State: Initialized, Role: FOLLOWER
I20250623 14:06:39.495271 4498 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } }
I20250623 14:06:39.495762 4498 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:39.495990 4498 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:39.496248 4498 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:39.500245 4498 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } }
I20250623 14:06:39.500998 4498 leader_election.cc:304] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7350ca642b1246f6a2c0684d2d76edff; no voters:
I20250623 14:06:39.502771 4498 leader_election.cc:290] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:39.503417 4503 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:39.505616 4503 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [term 1 LEADER]: Becoming Leader. State: Replica: 7350ca642b1246f6a2c0684d2d76edff, State: Running, Role: LEADER
I20250623 14:06:39.506314 4503 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } }
I20250623 14:06:39.506872 4498 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:06:39.517144 4504 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7350ca642b1246f6a2c0684d2d76edff" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } } }
I20250623 14:06:39.518054 4504 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:39.519026 4505 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7350ca642b1246f6a2c0684d2d76edff. Latest consensus state: current_term: 1 leader_uuid: "7350ca642b1246f6a2c0684d2d76edff" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7350ca642b1246f6a2c0684d2d76edff" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 37455 } } }
I20250623 14:06:39.521003 4505 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:39.522854 4513 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:06:39.535038 4513 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:06:39.551370 4513 catalog_manager.cc:1349] Generated new cluster ID: aab8ba220256439cb1fc500bd88b0920
I20250623 14:06:39.551671 4513 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:06:39.562469 4513 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:06:39.563886 4513 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:06:39.576622 4513 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7350ca642b1246f6a2c0684d2d76edff: Generated new TSK 0
I20250623 14:06:39.577527 4513 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:06:39.600302 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--builtin_ntp_servers=127.2.78.20:39175
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250623 14:06:39.911818 4522 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:39.912344 4522 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:39.912866 4522 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:39.944536 4522 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:39.945413 4522 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:06:39.981405 4522 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:39175
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:39.982913 4522 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:39.984629 4522 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:40.002429 4530 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:40.003374 4529 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:40.004173 4532 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:40.004406 4522 server_base.cc:1048] running on GCE node
I20250623 14:06:41.166846 4522 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:41.169579 4522 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:41.171022 4522 hybrid_clock.cc:648] HybridClock initialized: now 1750687601170966 us; error 74 us; skew 500 ppm
I20250623 14:06:41.171795 4522 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:41.178834 4522 webserver.cc:469] Webserver started at http://127.2.78.1:40087/ using document root <none> and password file <none>
I20250623 14:06:41.179733 4522 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:41.179930 4522 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:41.180467 4522 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:41.184988 4522 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "5f3293eb65154372a10a4ce36c1b1f1a"
format_stamp: "Formatted at 2025-06-23 14:06:41 on dist-test-slave-stbh"
I20250623 14:06:41.186115 4522 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "5f3293eb65154372a10a4ce36c1b1f1a"
format_stamp: "Formatted at 2025-06-23 14:06:41 on dist-test-slave-stbh"
I20250623 14:06:41.193348 4522 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250623 14:06:41.199357 4539 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:41.200426 4522 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250623 14:06:41.200727 4522 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "5f3293eb65154372a10a4ce36c1b1f1a"
format_stamp: "Formatted at 2025-06-23 14:06:41 on dist-test-slave-stbh"
I20250623 14:06:41.201045 4522 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:41.260219 4522 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:41.261718 4522 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:41.262193 4522 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:41.264750 4522 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:41.269357 4522 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:41.269565 4522 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:41.269824 4522 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:41.269985 4522 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:41.402633 4522 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:35311
I20250623 14:06:41.402762 4652 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:35311 every 8 connection(s)
I20250623 14:06:41.405418 4522 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:06:41.415731 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4522
I20250623 14:06:41.416119 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:06:41.422219 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--builtin_ntp_servers=127.2.78.20:39175
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:41.426385 4653 heartbeater.cc:344] Connected to a master server at 127.2.78.62:37455
I20250623 14:06:41.426787 4653 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:41.427702 4653 heartbeater.cc:507] Master 127.2.78.62:37455 requested a full tablet report, sending...
I20250623 14:06:41.430321 4463 ts_manager.cc:194] Registered new tserver with Master: 5f3293eb65154372a10a4ce36c1b1f1a (127.2.78.1:35311)
I20250623 14:06:41.432216 4463 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:50215
W20250623 14:06:41.725332 4657 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:41.725894 4657 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:41.726399 4657 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:41.757582 4657 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:41.758459 4657 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:06:41.793637 4657 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:39175
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:41.794997 4657 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:41.796638 4657 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:41.812237 4664 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:42.435588 4653 heartbeater.cc:499] Master 127.2.78.62:37455 was elected leader, sending a full tablet report...
W20250623 14:06:41.812263 4663 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:41.814658 4666 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:42.961897 4665 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:06:42.961968 4657 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:42.966482 4657 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:42.969123 4657 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:42.970619 4657 hybrid_clock.cc:648] HybridClock initialized: now 1750687602970564 us; error 67 us; skew 500 ppm
I20250623 14:06:42.971431 4657 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:42.983304 4657 webserver.cc:469] Webserver started at http://127.2.78.2:36847/ using document root <none> and password file <none>
I20250623 14:06:42.984244 4657 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:42.984454 4657 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:42.984928 4657 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:42.989492 4657 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "d2fe7a584d0c439fb4303e4d1972ec78"
format_stamp: "Formatted at 2025-06-23 14:06:42 on dist-test-slave-stbh"
I20250623 14:06:42.990643 4657 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "d2fe7a584d0c439fb4303e4d1972ec78"
format_stamp: "Formatted at 2025-06-23 14:06:42 on dist-test-slave-stbh"
I20250623 14:06:42.997466 4657 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250623 14:06:43.002709 4673 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:43.003695 4657 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250623 14:06:43.003985 4657 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d2fe7a584d0c439fb4303e4d1972ec78"
format_stamp: "Formatted at 2025-06-23 14:06:42 on dist-test-slave-stbh"
I20250623 14:06:43.004309 4657 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:43.055366 4657 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:43.056871 4657 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:43.057292 4657 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:43.059654 4657 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:43.063458 4657 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:43.063671 4657 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:43.063944 4657 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:43.064113 4657 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:43.195237 4657 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:45513
I20250623 14:06:43.195340 4786 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:45513 every 8 connection(s)
I20250623 14:06:43.197649 4657 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:06:43.207782 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4657
I20250623 14:06:43.208180 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:06:43.214511 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--builtin_ntp_servers=127.2.78.20:39175
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:43.217876 4787 heartbeater.cc:344] Connected to a master server at 127.2.78.62:37455
I20250623 14:06:43.218283 4787 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:43.219249 4787 heartbeater.cc:507] Master 127.2.78.62:37455 requested a full tablet report, sending...
I20250623 14:06:43.221379 4463 ts_manager.cc:194] Registered new tserver with Master: d2fe7a584d0c439fb4303e4d1972ec78 (127.2.78.2:45513)
I20250623 14:06:43.222667 4463 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:37069
W20250623 14:06:43.510843 4791 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:43.511349 4791 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:43.511853 4791 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:43.543385 4791 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:43.544258 4791 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:06:43.578226 4791 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:39175
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:37455
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:43.579546 4791 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:43.581203 4791 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:43.596547 4798 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:44.225735 4787 heartbeater.cc:499] Master 127.2.78.62:37455 was elected leader, sending a full tablet report...
W20250623 14:06:44.997877 4796 debug-util.cc:398] Leaking SignalData structure 0x7b08000184e0 after lost signal to thread 4791
W20250623 14:06:45.292855 4791 thread.cc:640] OpenStack (cloud detector) Time spent creating pthread: real 1.699s user 0.630s sys 1.068s
W20250623 14:06:43.596534 4797 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:45.293236 4791 thread.cc:606] OpenStack (cloud detector) Time spent starting thread: real 1.699s user 0.630s sys 1.068s
W20250623 14:06:45.295202 4800 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:45.297964 4799 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1700 milliseconds
I20250623 14:06:45.297987 4791 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:45.299129 4791 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:45.301182 4791 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:45.302521 4791 hybrid_clock.cc:648] HybridClock initialized: now 1750687605302462 us; error 68 us; skew 500 ppm
I20250623 14:06:45.303305 4791 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:45.309353 4791 webserver.cc:469] Webserver started at http://127.2.78.3:46233/ using document root <none> and password file <none>
I20250623 14:06:45.310292 4791 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:45.310506 4791 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:45.310952 4791 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:45.315263 4791 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "5c01f66532064a44bcf776410c73bc1f"
format_stamp: "Formatted at 2025-06-23 14:06:45 on dist-test-slave-stbh"
I20250623 14:06:45.316325 4791 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "5c01f66532064a44bcf776410c73bc1f"
format_stamp: "Formatted at 2025-06-23 14:06:45 on dist-test-slave-stbh"
I20250623 14:06:45.323788 4791 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.004s
I20250623 14:06:45.329376 4808 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:45.330413 4791 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250623 14:06:45.330754 4791 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "5c01f66532064a44bcf776410c73bc1f"
format_stamp: "Formatted at 2025-06-23 14:06:45 on dist-test-slave-stbh"
I20250623 14:06:45.331069 4791 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:45.378057 4791 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:45.379490 4791 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:45.379964 4791 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:45.382431 4791 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:45.386420 4791 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:45.386646 4791 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:45.386893 4791 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:45.387121 4791 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.001s
I20250623 14:06:45.528636 4791 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:35181
I20250623 14:06:45.528790 4921 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:35181 every 8 connection(s)
I20250623 14:06:45.531275 4791 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:06:45.541047 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4791
I20250623 14:06:45.541831 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:06:45.553723 4922 heartbeater.cc:344] Connected to a master server at 127.2.78.62:37455
I20250623 14:06:45.554174 4922 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:45.555122 4922 heartbeater.cc:507] Master 127.2.78.62:37455 requested a full tablet report, sending...
I20250623 14:06:45.557070 4463 ts_manager.cc:194] Registered new tserver with Master: 5c01f66532064a44bcf776410c73bc1f (127.2.78.3:35181)
I20250623 14:06:45.558308 4463 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:33203
I20250623 14:06:45.562266 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:45.596606 4463 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59762:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250623 14:06:45.615486 4463 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:06:45.670856 4856 tablet_service.cc:1468] Processing CreateTablet for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9 (DEFAULT_TABLE table=TestTable [id=fd2de7ddccbe49089e573b3cd49a481e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:45.672852 4856 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:45.677706 4721 tablet_service.cc:1468] Processing CreateTablet for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9 (DEFAULT_TABLE table=TestTable [id=fd2de7ddccbe49089e573b3cd49a481e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:45.678177 4587 tablet_service.cc:1468] Processing CreateTablet for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9 (DEFAULT_TABLE table=TestTable [id=fd2de7ddccbe49089e573b3cd49a481e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:45.679396 4721 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:45.680855 4587 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d6b12584f914bdc9a9b6a3d9a8fd2e9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:45.700836 4942 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Bootstrap starting.
I20250623 14:06:45.703315 4943 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Bootstrap starting.
I20250623 14:06:45.708853 4942 tablet_bootstrap.cc:654] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:45.711133 4944 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Bootstrap starting.
I20250623 14:06:45.711191 4943 tablet_bootstrap.cc:654] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:45.711242 4942 log.cc:826] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:45.713630 4943 log.cc:826] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:45.717073 4942 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: No bootstrap required, opened a new log
I20250623 14:06:45.717674 4942 ts_tablet_manager.cc:1397] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Time spent bootstrapping tablet: real 0.017s user 0.008s sys 0.008s
I20250623 14:06:45.718791 4944 tablet_bootstrap.cc:654] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:45.719074 4943 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: No bootstrap required, opened a new log
I20250623 14:06:45.719602 4943 ts_tablet_manager.cc:1397] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Time spent bootstrapping tablet: real 0.017s user 0.011s sys 0.005s
I20250623 14:06:45.720808 4944 log.cc:826] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:45.725561 4944 tablet_bootstrap.cc:492] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: No bootstrap required, opened a new log
I20250623 14:06:45.726009 4944 ts_tablet_manager.cc:1397] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Time spent bootstrapping tablet: real 0.015s user 0.009s sys 0.005s
I20250623 14:06:45.744514 4944 raft_consensus.cc:357] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.745246 4944 raft_consensus.cc:383] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:45.744750 4942 raft_consensus.cc:357] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.745510 4944 raft_consensus.cc:738] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5f3293eb65154372a10a4ce36c1b1f1a, State: Initialized, Role: FOLLOWER
I20250623 14:06:45.745710 4942 raft_consensus.cc:383] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:45.746037 4942 raft_consensus.cc:738] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2fe7a584d0c439fb4303e4d1972ec78, State: Initialized, Role: FOLLOWER
I20250623 14:06:45.746425 4944 consensus_queue.cc:260] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.746582 4943 raft_consensus.cc:357] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.746913 4942 consensus_queue.cc:260] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.747475 4943 raft_consensus.cc:383] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:45.747808 4943 raft_consensus.cc:738] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5c01f66532064a44bcf776410c73bc1f, State: Initialized, Role: FOLLOWER
I20250623 14:06:45.748857 4943 consensus_queue.cc:260] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.756526 4944 ts_tablet_manager.cc:1428] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Time spent starting tablet: real 0.030s user 0.016s sys 0.012s
I20250623 14:06:45.759514 4942 ts_tablet_manager.cc:1428] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Time spent starting tablet: real 0.041s user 0.025s sys 0.012s
I20250623 14:06:45.761307 4922 heartbeater.cc:499] Master 127.2.78.62:37455 was elected leader, sending a full tablet report...
I20250623 14:06:45.762527 4943 ts_tablet_manager.cc:1428] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Time spent starting tablet: real 0.043s user 0.039s sys 0.004s
W20250623 14:06:45.785948 4923 tablet.cc:2378] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:45.796088 4950 raft_consensus.cc:491] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:06:45.796604 4950 raft_consensus.cc:513] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.799319 4950 leader_election.cc:290] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 5f3293eb65154372a10a4ce36c1b1f1a (127.2.78.1:35311), d2fe7a584d0c439fb4303e4d1972ec78 (127.2.78.2:45513)
I20250623 14:06:45.808568 4607 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d6b12584f914bdc9a9b6a3d9a8fd2e9" candidate_uuid: "5c01f66532064a44bcf776410c73bc1f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" is_pre_election: true
I20250623 14:06:45.809528 4607 raft_consensus.cc:2466] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c01f66532064a44bcf776410c73bc1f in term 0.
I20250623 14:06:45.810556 4741 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d6b12584f914bdc9a9b6a3d9a8fd2e9" candidate_uuid: "5c01f66532064a44bcf776410c73bc1f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" is_pre_election: true
I20250623 14:06:45.810931 4810 leader_election.cc:304] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5c01f66532064a44bcf776410c73bc1f, 5f3293eb65154372a10a4ce36c1b1f1a; no voters:
I20250623 14:06:45.811224 4741 raft_consensus.cc:2466] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c01f66532064a44bcf776410c73bc1f in term 0.
I20250623 14:06:45.811672 4950 raft_consensus.cc:2802] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:06:45.812034 4950 raft_consensus.cc:491] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:06:45.812315 4950 raft_consensus.cc:3058] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:45.816510 4950 raft_consensus.cc:513] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.817875 4950 leader_election.cc:290] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [CANDIDATE]: Term 1 election: Requested vote from peers 5f3293eb65154372a10a4ce36c1b1f1a (127.2.78.1:35311), d2fe7a584d0c439fb4303e4d1972ec78 (127.2.78.2:45513)
I20250623 14:06:45.818764 4607 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d6b12584f914bdc9a9b6a3d9a8fd2e9" candidate_uuid: "5c01f66532064a44bcf776410c73bc1f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5f3293eb65154372a10a4ce36c1b1f1a"
I20250623 14:06:45.818838 4741 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d6b12584f914bdc9a9b6a3d9a8fd2e9" candidate_uuid: "5c01f66532064a44bcf776410c73bc1f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2fe7a584d0c439fb4303e4d1972ec78"
I20250623 14:06:45.819171 4607 raft_consensus.cc:3058] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:45.819271 4741 raft_consensus.cc:3058] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:45.823644 4741 raft_consensus.cc:2466] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c01f66532064a44bcf776410c73bc1f in term 1.
I20250623 14:06:45.823648 4607 raft_consensus.cc:2466] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c01f66532064a44bcf776410c73bc1f in term 1.
I20250623 14:06:45.824469 4810 leader_election.cc:304] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5c01f66532064a44bcf776410c73bc1f, 5f3293eb65154372a10a4ce36c1b1f1a; no voters:
I20250623 14:06:45.825099 4950 raft_consensus.cc:2802] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:45.826619 4950 raft_consensus.cc:695] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [term 1 LEADER]: Becoming Leader. State: Replica: 5c01f66532064a44bcf776410c73bc1f, State: Running, Role: LEADER
I20250623 14:06:45.827643 4950 consensus_queue.cc:237] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } }
I20250623 14:06:45.838431 4461 catalog_manager.cc:5582] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f reported cstate change: term changed from 0 to 1, leader changed from <none> to 5c01f66532064a44bcf776410c73bc1f (127.2.78.3). New cstate: current_term: 1 leader_uuid: "5c01f66532064a44bcf776410c73bc1f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5c01f66532064a44bcf776410c73bc1f" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 35181 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 } health_report { overall_health: UNKNOWN } } }
I20250623 14:06:45.876531 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:45.880086 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5f3293eb65154372a10a4ce36c1b1f1a to finish bootstrapping
I20250623 14:06:45.892355 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d2fe7a584d0c439fb4303e4d1972ec78 to finish bootstrapping
I20250623 14:06:45.903189 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5c01f66532064a44bcf776410c73bc1f to finish bootstrapping
W20250623 14:06:45.914681 4654 tablet.cc:2378] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5f3293eb65154372a10a4ce36c1b1f1a: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:06:45.954820 4788 tablet.cc:2378] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P d2fe7a584d0c439fb4303e4d1972ec78: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:06:46.214020 4950 consensus_queue.cc:1035] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [LEADER]: Connected to new peer: Peer: permanent_uuid: "5f3293eb65154372a10a4ce36c1b1f1a" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 35311 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:06:46.241187 4950 consensus_queue.cc:1035] T 3d6b12584f914bdc9a9b6a3d9a8fd2e9 P 5c01f66532064a44bcf776410c73bc1f [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2fe7a584d0c439fb4303e4d1972ec78" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 45513 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250623 14:06:47.621465 4461 server_base.cc:1130] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:59766
I20250623 14:06:48.652153 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4522
I20250623 14:06:48.678612 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4657
I20250623 14:06:48.705894 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4791
I20250623 14:06:48.731626 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4430
2025-06-23T14:06:48Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (11151 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250623 14:06:48.783142 2360 test_util.cc:276] Using random seed: -1190608626
I20250623 14:06:48.787227 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:06:48.787396 2360 ts_itest-base.cc:116] --------------
I20250623 14:06:48.787510 2360 ts_itest-base.cc:117] 3 tablet servers
I20250623 14:06:48.787619 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:06:48.787732 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:06:48Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:06:48Z Disabled control of system clock
I20250623 14:06:48.826108 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:40255 with env {}
W20250623 14:06:49.129128 4987 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:49.129696 4987 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:49.130143 4987 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:49.161543 4987 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:06:49.161933 4987 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:49.162160 4987 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:06:49.162358 4987 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:06:49.198786 4987 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:40255
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:49.200130 4987 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:49.201736 4987 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:49.215745 4994 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:49.215811 4993 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:50.417912 4987 thread.cc:640] OpenStack (cloud detector) Time spent creating pthread: real 1.205s user 0.411s sys 0.792s
W20250623 14:06:50.418219 4987 thread.cc:606] OpenStack (cloud detector) Time spent starting thread: real 1.205s user 0.411s sys 0.792s
I20250623 14:06:50.418465 4987 server_base.cc:1048] running on GCE node
W20250623 14:06:50.418982 4996 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:50.420362 4987 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:50.423086 4987 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:50.424443 4987 hybrid_clock.cc:648] HybridClock initialized: now 1750687610424385 us; error 61 us; skew 500 ppm
I20250623 14:06:50.425279 4987 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:50.436864 4987 webserver.cc:469] Webserver started at http://127.2.78.62:44467/ using document root <none> and password file <none>
I20250623 14:06:50.437894 4987 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:50.438136 4987 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:50.438602 4987 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:50.443151 4987 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:06:50.444257 4987 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:06:50.452186 4987 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.007s
I20250623 14:06:50.458102 5003 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:50.459376 4987 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250623 14:06:50.459721 4987 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:06:50.460071 4987 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:50.537878 4987 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:50.539402 4987 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:50.539861 4987 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:50.611168 4987 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:40255
I20250623 14:06:50.611284 5054 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:40255 every 8 connection(s)
I20250623 14:06:50.613974 4987 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:06:50.617098 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 4987
I20250623 14:06:50.617621 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:06:50.620282 5055 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:50.645390 5055 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap starting.
I20250623 14:06:50.651213 5055 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:50.653002 5055 log.cc:826] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:50.657606 5055 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: No bootstrap required, opened a new log
I20250623 14:06:50.676162 5055 raft_consensus.cc:357] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:06:50.676894 5055 raft_consensus.cc:383] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:50.677162 5055 raft_consensus.cc:738] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Initialized, Role: FOLLOWER
I20250623 14:06:50.677851 5055 consensus_queue.cc:260] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:06:50.678346 5055 raft_consensus.cc:397] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:50.678601 5055 raft_consensus.cc:491] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:50.678854 5055 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:50.683064 5055 raft_consensus.cc:513] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:06:50.683820 5055 leader_election.cc:304] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 143c3bc2d119440cbb3dff3b78eff53d; no voters:
I20250623 14:06:50.685909 5055 leader_election.cc:290] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:50.686740 5060 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:50.689020 5060 raft_consensus.cc:695] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 1 LEADER]: Becoming Leader. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Running, Role: LEADER
I20250623 14:06:50.689913 5060 consensus_queue.cc:237] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:06:50.690963 5055 sys_catalog.cc:564] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:06:50.698237 5061 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:06:50.698213 5062 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: New leader 143c3bc2d119440cbb3dff3b78eff53d. Latest consensus state: current_term: 1 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:06:50.699043 5061 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:50.699124 5062 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:06:50.703074 5067 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:06:50.715827 5067 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:06:50.733700 5067 catalog_manager.cc:1349] Generated new cluster ID: dd11aa33191a4d66a13dfe050ac01c88
I20250623 14:06:50.734022 5067 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:06:50.753651 5067 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:06:50.755780 5067 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:06:50.773213 5067 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Generated new TSK 0
I20250623 14:06:50.774416 5067 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:06:50.791196 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250623 14:06:51.099184 5079 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:51.099699 5079 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:51.100198 5079 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:51.132381 5079 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:51.133252 5079 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:06:51.168859 5079 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:51.170200 5079 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:51.171833 5079 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:51.189635 5086 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:51.192654 5088 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:51.193308 5079 server_base.cc:1048] running on GCE node
W20250623 14:06:51.193066 5085 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:52.376858 5079 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:52.379657 5079 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:52.381093 5079 hybrid_clock.cc:648] HybridClock initialized: now 1750687612381028 us; error 71 us; skew 500 ppm
I20250623 14:06:52.382192 5079 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:52.390059 5079 webserver.cc:469] Webserver started at http://127.2.78.1:34851/ using document root <none> and password file <none>
I20250623 14:06:52.391290 5079 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:52.391573 5079 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:52.392190 5079 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:52.398715 5079 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "6916fcf9b530457ca60b224fc870be01"
format_stamp: "Formatted at 2025-06-23 14:06:52 on dist-test-slave-stbh"
I20250623 14:06:52.400177 5079 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "6916fcf9b530457ca60b224fc870be01"
format_stamp: "Formatted at 2025-06-23 14:06:52 on dist-test-slave-stbh"
I20250623 14:06:52.409518 5079 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.010s sys 0.000s
I20250623 14:06:52.417057 5095 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:52.418301 5079 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250623 14:06:52.418694 5079 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "6916fcf9b530457ca60b224fc870be01"
format_stamp: "Formatted at 2025-06-23 14:06:52 on dist-test-slave-stbh"
I20250623 14:06:52.419127 5079 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:52.488649 5079 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:52.490543 5079 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:52.491076 5079 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:52.494242 5079 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:52.499728 5079 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:52.500011 5079 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:52.500294 5079 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:52.500504 5079 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:52.635239 5079 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:41875
I20250623 14:06:52.635342 5207 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:41875 every 8 connection(s)
I20250623 14:06:52.637861 5079 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:06:52.641274 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5079
I20250623 14:06:52.641829 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:06:52.649663 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:52.662305 5208 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:06:52.662853 5208 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:52.664175 5208 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:06:52.667152 5020 ts_manager.cc:194] Registered new tserver with Master: 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:06:52.669200 5020 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:44839
W20250623 14:06:52.955895 5212 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:52.956439 5212 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:52.956972 5212 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:52.988672 5212 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:52.989599 5212 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:06:53.024026 5212 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:53.025398 5212 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:53.027124 5212 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:53.043632 5218 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:53.673225 5208 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
W20250623 14:06:53.043707 5219 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:53.043833 5221 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:53.045989 5212 server_base.cc:1048] running on GCE node
I20250623 14:06:54.193463 5212 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:54.196353 5212 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:54.197816 5212 hybrid_clock.cc:648] HybridClock initialized: now 1750687614197780 us; error 51 us; skew 500 ppm
I20250623 14:06:54.198721 5212 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:54.205252 5212 webserver.cc:469] Webserver started at http://127.2.78.2:40613/ using document root <none> and password file <none>
I20250623 14:06:54.206233 5212 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:54.206423 5212 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:54.206825 5212 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:54.211395 5212 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "7e49e301bb5f4e74937085e1a83fb792"
format_stamp: "Formatted at 2025-06-23 14:06:54 on dist-test-slave-stbh"
I20250623 14:06:54.212507 5212 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "7e49e301bb5f4e74937085e1a83fb792"
format_stamp: "Formatted at 2025-06-23 14:06:54 on dist-test-slave-stbh"
I20250623 14:06:54.219997 5212 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.007s
I20250623 14:06:54.225647 5228 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:54.226719 5212 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250623 14:06:54.227026 5212 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "7e49e301bb5f4e74937085e1a83fb792"
format_stamp: "Formatted at 2025-06-23 14:06:54 on dist-test-slave-stbh"
I20250623 14:06:54.227381 5212 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:54.286654 5212 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:54.288100 5212 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:54.288522 5212 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:54.291023 5212 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:54.295148 5212 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:54.295349 5212 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:54.295615 5212 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:54.295776 5212 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:54.429915 5212 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:33299
I20250623 14:06:54.430087 5340 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:33299 every 8 connection(s)
I20250623 14:06:54.434060 5212 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:06:54.436774 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5212
I20250623 14:06:54.437387 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:06:54.446089 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:06:54.462745 5341 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:06:54.463199 5341 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:54.464265 5341 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:06:54.466545 5020 ts_manager.cc:194] Registered new tserver with Master: 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299)
I20250623 14:06:54.468199 5020 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:58287
W20250623 14:06:54.762513 5345 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:06:54.763032 5345 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:06:54.763538 5345 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:06:54.795858 5345 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:06:54.797361 5345 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:06:54.834842 5345 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:06:54.836100 5345 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:06:54.837688 5345 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:06:54.854673 5352 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:06:55.471951 5341 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
W20250623 14:06:54.854705 5351 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:54.854854 5354 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:06:56.022084 5353 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:06:56.022197 5345 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:06:56.026190 5345 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:06:56.028872 5345 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:06:56.030309 5345 hybrid_clock.cc:648] HybridClock initialized: now 1750687616030260 us; error 83 us; skew 500 ppm
I20250623 14:06:56.031119 5345 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:06:56.037698 5345 webserver.cc:469] Webserver started at http://127.2.78.3:40601/ using document root <none> and password file <none>
I20250623 14:06:56.038720 5345 fs_manager.cc:362] Metadata directory not provided
I20250623 14:06:56.038913 5345 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:06:56.039376 5345 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:06:56.043870 5345 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "b940cd7f3f124d68bcba419c0b50f589"
format_stamp: "Formatted at 2025-06-23 14:06:56 on dist-test-slave-stbh"
I20250623 14:06:56.045037 5345 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "b940cd7f3f124d68bcba419c0b50f589"
format_stamp: "Formatted at 2025-06-23 14:06:56 on dist-test-slave-stbh"
I20250623 14:06:56.052469 5345 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.010s sys 0.000s
I20250623 14:06:56.058154 5362 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:56.059159 5345 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250623 14:06:56.059480 5345 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b940cd7f3f124d68bcba419c0b50f589"
format_stamp: "Formatted at 2025-06-23 14:06:56 on dist-test-slave-stbh"
I20250623 14:06:56.059793 5345 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:06:56.112883 5345 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:06:56.114380 5345 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:06:56.114941 5345 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:06:56.117575 5345 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:06:56.121606 5345 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:06:56.121846 5345 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250623 14:06:56.122130 5345 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:06:56.122295 5345 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:06:56.254949 5345 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:38593
I20250623 14:06:56.255088 5474 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:38593 every 8 connection(s)
I20250623 14:06:56.257436 5345 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:06:56.268227 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5345
I20250623 14:06:56.268821 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:06:56.278326 5475 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:06:56.278774 5475 heartbeater.cc:461] Registering TS with master...
I20250623 14:06:56.279733 5475 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:06:56.281883 5020 ts_manager.cc:194] Registered new tserver with Master: b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:06:56.283068 5020 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:42407
I20250623 14:06:56.289203 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:56.323170 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:06:56.323474 2360 test_util.cc:276] Using random seed: -1183068286
I20250623 14:06:56.363296 5020 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34512:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250623 14:06:56.404767 5410 tablet_service.cc:1468] Processing CreateTablet for tablet 85d08d41f0684b42a69070fc40c4c47c (DEFAULT_TABLE table=TestTable [id=a53f66611b7241ee8f17541017ddf843]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:56.406569 5410 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 85d08d41f0684b42a69070fc40c4c47c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:56.426450 5495 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap starting.
I20250623 14:06:56.434120 5495 tablet_bootstrap.cc:654] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:56.436120 5495 log.cc:826] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:56.440948 5495 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: No bootstrap required, opened a new log
I20250623 14:06:56.441444 5495 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent bootstrapping tablet: real 0.016s user 0.006s sys 0.007s
I20250623 14:06:56.458727 5495 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:06:56.459394 5495 raft_consensus.cc:383] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:56.459651 5495 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Initialized, Role: FOLLOWER
I20250623 14:06:56.460409 5495 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:06:56.460949 5495 raft_consensus.cc:397] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:56.461233 5495 raft_consensus.cc:491] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:56.461578 5495 raft_consensus.cc:3058] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:56.465742 5495 raft_consensus.cc:513] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:06:56.466450 5495 leader_election.cc:304] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b940cd7f3f124d68bcba419c0b50f589; no voters:
I20250623 14:06:56.468122 5495 leader_election.cc:290] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:56.468504 5497 raft_consensus.cc:2802] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:56.471100 5497 raft_consensus.cc:695] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 LEADER]: Becoming Leader. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Running, Role: LEADER
I20250623 14:06:56.472009 5475 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
I20250623 14:06:56.472105 5497 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:06:56.473001 5495 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent starting tablet: real 0.031s user 0.023s sys 0.009s
I20250623 14:06:56.484627 5019 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: term changed from 0 to 1, leader changed from <none> to b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3). New cstate: current_term: 1 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:56.697980 2360 test_util.cc:276] Using random seed: -1182693793
I20250623 14:06:56.719887 5017 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34526:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250623 14:06:56.748106 5143 tablet_service.cc:1468] Processing CreateTablet for tablet a1c4064e64534f98b92fa326520bca37 (DEFAULT_TABLE table=TestTable1 [id=490b378940214eb4a4b06ed4a46c4b2a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:56.749547 5143 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a1c4064e64534f98b92fa326520bca37. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:56.770013 5515 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap starting.
I20250623 14:06:56.775573 5515 tablet_bootstrap.cc:654] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:56.777290 5515 log.cc:826] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:56.782079 5515 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: No bootstrap required, opened a new log
I20250623 14:06:56.782521 5515 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent bootstrapping tablet: real 0.013s user 0.008s sys 0.004s
I20250623 14:06:56.799609 5515 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:06:56.800213 5515 raft_consensus.cc:383] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:56.800447 5515 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Initialized, Role: FOLLOWER
I20250623 14:06:56.801122 5515 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:06:56.801622 5515 raft_consensus.cc:397] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:56.801909 5515 raft_consensus.cc:491] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:56.802202 5515 raft_consensus.cc:3058] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:56.806465 5515 raft_consensus.cc:513] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:06:56.807348 5515 leader_election.cc:304] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01; no voters:
I20250623 14:06:56.809413 5515 leader_election.cc:290] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:56.809850 5517 raft_consensus.cc:2802] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:56.812414 5517 raft_consensus.cc:695] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 LEADER]: Becoming Leader. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Running, Role: LEADER
I20250623 14:06:56.813516 5517 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:06:56.814018 5515 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent starting tablet: real 0.031s user 0.027s sys 0.004s
I20250623 14:06:56.824805 5017 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 reported cstate change: term changed from 0 to 1, leader changed from <none> to 6916fcf9b530457ca60b224fc870be01 (127.2.78.1). New cstate: current_term: 1 leader_uuid: "6916fcf9b530457ca60b224fc870be01" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:57.001515 2360 test_util.cc:276] Using random seed: -1182390268
I20250623 14:06:57.022605 5013 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34540:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250623 14:06:57.051203 5276 tablet_service.cc:1468] Processing CreateTablet for tablet 6b841f86f0284c8b985ece6b0e89a2ce (DEFAULT_TABLE table=TestTable2 [id=40c2ba80364649e4ab2f1129ff0fff0c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:06:57.052649 5276 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6b841f86f0284c8b985ece6b0e89a2ce. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:06:57.071604 5535 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:06:57.077500 5535 tablet_bootstrap.cc:654] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Neither blocks nor log segments found. Creating new log.
I20250623 14:06:57.079252 5535 log.cc:826] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Log is configured to *not* fsync() on all Append() calls
I20250623 14:06:57.083668 5535 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: No bootstrap required, opened a new log
I20250623 14:06:57.084105 5535 ts_tablet_manager.cc:1397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.013s user 0.005s sys 0.006s
I20250623 14:06:57.101810 5535 raft_consensus.cc:357] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:06:57.102418 5535 raft_consensus.cc:383] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:06:57.102663 5535 raft_consensus.cc:738] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: FOLLOWER
I20250623 14:06:57.103364 5535 consensus_queue.cc:260] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:06:57.103978 5535 raft_consensus.cc:397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:06:57.104264 5535 raft_consensus.cc:491] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:06:57.104554 5535 raft_consensus.cc:3058] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:06:57.108726 5535 raft_consensus.cc:513] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:06:57.109458 5535 leader_election.cc:304] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7e49e301bb5f4e74937085e1a83fb792; no voters:
I20250623 14:06:57.111641 5535 leader_election.cc:290] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:06:57.112056 5537 raft_consensus.cc:2802] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:06:57.115564 5535 ts_tablet_manager.cc:1428] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.031s user 0.022s sys 0.011s
I20250623 14:06:57.115516 5537 raft_consensus.cc:695] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 LEADER]: Becoming Leader. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Running, Role: LEADER
I20250623 14:06:57.116288 5537 consensus_queue.cc:237] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:06:57.127178 5013 catalog_manager.cc:5582] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: term changed from 0 to 1, leader changed from <none> to 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2). New cstate: current_term: 1 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } health_report { overall_health: HEALTHY } } }
I20250623 14:06:57.328979 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 4987
W20250623 14:06:57.500013 5475 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:40255: connect: Connection refused (error 111)
W20250623 14:06:57.842418 5208 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:40255: connect: Connection refused (error 111)
W20250623 14:06:58.144580 5341 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:40255: connect: Connection refused (error 111)
I20250623 14:07:02.162755 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5079
I20250623 14:07:02.186550 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5212
I20250623 14:07:02.215816 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5345
I20250623 14:07:02.243640 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--webserver_interface=127.2.78.62
--webserver_port=44467
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:40255 with env {}
W20250623 14:07:02.540839 5614 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:02.541435 5614 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:02.541934 5614 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:02.572490 5614 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:07:02.572825 5614 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:02.573117 5614 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:07:02.573351 5614 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:07:02.607858 5614 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:40255
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=44467
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:02.609212 5614 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:02.610944 5614 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:02.624491 5621 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:02.625451 5620 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:02.626583 5614 server_base.cc:1048] running on GCE node
W20250623 14:07:02.626271 5623 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:03.787541 5614 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:03.790241 5614 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:03.791618 5614 hybrid_clock.cc:648] HybridClock initialized: now 1750687623791570 us; error 56 us; skew 500 ppm
I20250623 14:07:03.792423 5614 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:03.802353 5614 webserver.cc:469] Webserver started at http://127.2.78.62:44467/ using document root <none> and password file <none>
I20250623 14:07:03.803287 5614 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:03.803606 5614 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:03.811419 5614 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250623 14:07:03.815986 5630 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:03.817000 5614 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250623 14:07:03.817315 5614 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:07:03.819190 5614 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:03.868877 5614 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:03.870421 5614 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:03.870868 5614 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:03.940200 5614 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:40255
I20250623 14:07:03.940299 5681 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:40255 every 8 connection(s)
I20250623 14:07:03.943089 5614 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:07:03.947716 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5614
I20250623 14:07:03.949132 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:41875
--local_ip_for_outbound_sockets=127.2.78.1
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=34851
--webserver_interface=127.2.78.1
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:03.953298 5682 sys_catalog.cc:263] Verifying existing consensus state
I20250623 14:07:03.959152 5682 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap starting.
I20250623 14:07:03.975239 5682 log.cc:826] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:04.041968 5682 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:04.043138 5682 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap complete.
I20250623 14:07:04.065956 5682 raft_consensus.cc:357] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:04.068058 5682 raft_consensus.cc:738] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Initialized, Role: FOLLOWER
I20250623 14:07:04.068807 5682 consensus_queue.cc:260] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:04.069325 5682 raft_consensus.cc:397] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:04.069597 5682 raft_consensus.cc:491] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:04.069921 5682 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 2 FOLLOWER]: Advancing to term 3
I20250623 14:07:04.075589 5682 raft_consensus.cc:513] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:04.076211 5682 leader_election.cc:304] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 143c3bc2d119440cbb3dff3b78eff53d; no voters:
I20250623 14:07:04.078372 5682 leader_election.cc:290] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 3 election: Requested vote from peers
I20250623 14:07:04.078963 5686 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Leader election won for term 3
I20250623 14:07:04.081998 5686 raft_consensus.cc:695] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 LEADER]: Becoming Leader. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Running, Role: LEADER
I20250623 14:07:04.082813 5682 sys_catalog.cc:564] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:07:04.082875 5686 consensus_queue.cc:237] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:04.095290 5687 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:04.096335 5687 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:04.100561 5688 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: New leader 143c3bc2d119440cbb3dff3b78eff53d. Latest consensus state: current_term: 3 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:04.101192 5688 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:04.103374 5693 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:07:04.118865 5693 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=2c9f2fe916b34b4ab29fb394313c79b8]
I20250623 14:07:04.121490 5693 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=40c2ba80364649e4ab2f1129ff0fff0c]
I20250623 14:07:04.123915 5693 catalog_manager.cc:671] Loaded metadata for table TestTable [id=60c3f80fb7ec4179b6fb479f3f14bb48]
I20250623 14:07:04.134677 5693 tablet_loader.cc:96] loaded metadata for tablet 6b841f86f0284c8b985ece6b0e89a2ce (table TestTable2 [id=40c2ba80364649e4ab2f1129ff0fff0c])
I20250623 14:07:04.136440 5693 tablet_loader.cc:96] loaded metadata for tablet 85d08d41f0684b42a69070fc40c4c47c (table TestTable [id=60c3f80fb7ec4179b6fb479f3f14bb48])
I20250623 14:07:04.138074 5693 tablet_loader.cc:96] loaded metadata for tablet a1c4064e64534f98b92fa326520bca37 (table TestTable1 [id=2c9f2fe916b34b4ab29fb394313c79b8])
I20250623 14:07:04.139801 5693 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:07:04.147151 5693 catalog_manager.cc:1261] Loaded cluster ID: dd11aa33191a4d66a13dfe050ac01c88
I20250623 14:07:04.147578 5693 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:07:04.159672 5693 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:07:04.166471 5693 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Loaded TSK: 0
I20250623 14:07:04.168448 5693 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250623 14:07:04.328203 5684 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:04.328724 5684 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:04.329224 5684 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:04.360806 5684 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:04.361654 5684 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:07:04.396589 5684 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:41875
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=34851
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:04.397951 5684 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:04.399546 5684 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:04.416460 5710 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:04.419348 5712 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:04.420055 5684 server_base.cc:1048] running on GCE node
W20250623 14:07:04.419616 5709 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:05.588794 5684 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:05.591600 5684 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:05.593044 5684 hybrid_clock.cc:648] HybridClock initialized: now 1750687625592995 us; error 50 us; skew 500 ppm
I20250623 14:07:05.594157 5684 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:05.601513 5684 webserver.cc:469] Webserver started at http://127.2.78.1:34851/ using document root <none> and password file <none>
I20250623 14:07:05.602650 5684 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:05.602914 5684 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:05.612970 5684 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.001s sys 0.005s
I20250623 14:07:05.618650 5720 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:05.619686 5684 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250623 14:07:05.620048 5684 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "6916fcf9b530457ca60b224fc870be01"
format_stamp: "Formatted at 2025-06-23 14:06:52 on dist-test-slave-stbh"
I20250623 14:07:05.622599 5684 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:05.686203 5684 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:05.688024 5684 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:05.688541 5684 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:05.691617 5684 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:05.698403 5727 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:07:05.706066 5684 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:07:05.706357 5684 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.000s sys 0.002s
I20250623 14:07:05.706679 5684 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:07:05.713522 5684 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:07:05.713793 5684 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.000s sys 0.004s
I20250623 14:07:05.714186 5727 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap starting.
I20250623 14:07:05.780143 5727 log.cc:826] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:05.889940 5727 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:05.891268 5727 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap complete.
I20250623 14:07:05.892840 5727 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent bootstrapping tablet: real 0.179s user 0.132s sys 0.041s
I20250623 14:07:05.909042 5684 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:41875
I20250623 14:07:05.909359 5834 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:41875 every 8 connection(s)
I20250623 14:07:05.912608 5684 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:07:05.913453 5727 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:07:05.916420 5727 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Initialized, Role: FOLLOWER
I20250623 14:07:05.917547 5727 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:07:05.918280 5727 raft_consensus.cc:397] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:05.918946 5727 raft_consensus.cc:491] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:05.919447 5727 raft_consensus.cc:3058] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:07:05.920383 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5684
I20250623 14:07:05.922444 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:33299
--local_ip_for_outbound_sockets=127.2.78.2
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=40613
--webserver_interface=127.2.78.2
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:05.928658 5727 raft_consensus.cc:513] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:07:05.929641 5727 leader_election.cc:304] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01; no voters:
I20250623 14:07:05.938536 5727 leader_election.cc:290] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250623 14:07:05.940742 5839 raft_consensus.cc:2802] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:07:05.951653 5727 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent starting tablet: real 0.059s user 0.044s sys 0.005s
I20250623 14:07:05.953130 5839 raft_consensus.cc:695] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 LEADER]: Becoming Leader. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Running, Role: LEADER
I20250623 14:07:05.954111 5839 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } }
I20250623 14:07:05.969278 5835 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:05.969679 5835 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:05.970633 5835 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:05.975723 5647 ts_manager.cc:194] Registered new tserver with Master: 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:07:05.981305 5647 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 reported cstate change: term changed from 0 to 2, leader changed from <none> to 6916fcf9b530457ca60b224fc870be01 (127.2.78.1), VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) added. New cstate: current_term: 2 leader_uuid: "6916fcf9b530457ca60b224fc870be01" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } health_report { overall_health: HEALTHY } } }
W20250623 14:07:06.036537 5647 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet a1c4064e64534f98b92fa326520bca37 with cas_config_opid_index -1: no extra replica candidate found for tablet a1c4064e64534f98b92fa326520bca37 (table TestTable1 [id=2c9f2fe916b34b4ab29fb394313c79b8]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (1) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
I20250623 14:07:06.038796 5647 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:59205
I20250623 14:07:06.042932 5835 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
W20250623 14:07:06.277734 5840 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:06.278286 5840 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:06.278800 5840 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:06.309265 5840 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:06.310153 5840 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:07:06.344673 5840 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:33299
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=40613
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:06.346048 5840 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:06.347772 5840 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:06.364010 5854 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:06.364810 5857 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:06.365127 5855 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:06.365852 5840 server_base.cc:1048] running on GCE node
I20250623 14:07:07.508543 5840 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:07.510761 5840 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:07.512094 5840 hybrid_clock.cc:648] HybridClock initialized: now 1750687627512072 us; error 43 us; skew 500 ppm
I20250623 14:07:07.512871 5840 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:07.518988 5840 webserver.cc:469] Webserver started at http://127.2.78.2:40613/ using document root <none> and password file <none>
I20250623 14:07:07.519855 5840 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:07.520052 5840 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:07.527760 5840 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.003s
I20250623 14:07:07.532222 5864 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:07.533280 5840 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:07:07.533560 5840 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "7e49e301bb5f4e74937085e1a83fb792"
format_stamp: "Formatted at 2025-06-23 14:06:54 on dist-test-slave-stbh"
I20250623 14:07:07.535459 5840 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:07.583799 5840 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:07.585368 5840 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:07.585800 5840 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:07.588160 5840 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:07.593643 5871 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:07:07.601061 5840 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:07:07.601285 5840 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.001s
I20250623 14:07:07.601511 5840 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:07:07.606091 5840 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:07:07.606266 5840 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.000s sys 0.005s
I20250623 14:07:07.606619 5871 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:07.676623 5871 log.cc:826] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:07.757028 5871 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:07.757869 5871 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:07.759178 5871 ts_tablet_manager.cc:1397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.153s user 0.113s sys 0.035s
I20250623 14:07:07.774451 5871 raft_consensus.cc:357] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:07.776443 5871 raft_consensus.cc:738] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: FOLLOWER
I20250623 14:07:07.777231 5871 consensus_queue.cc:260] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:07.777709 5871 raft_consensus.cc:397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:07.778033 5871 raft_consensus.cc:491] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:07.778293 5871 raft_consensus.cc:3058] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:07:07.784160 5871 raft_consensus.cc:513] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:07.784915 5871 leader_election.cc:304] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7e49e301bb5f4e74937085e1a83fb792; no voters:
I20250623 14:07:07.787070 5871 leader_election.cc:290] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250623 14:07:07.787518 5976 raft_consensus.cc:2802] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:07:07.791496 5871 ts_tablet_manager.cc:1428] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.032s user 0.027s sys 0.005s
I20250623 14:07:07.791496 5976 raft_consensus.cc:695] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEADER]: Becoming Leader. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Running, Role: LEADER
I20250623 14:07:07.792574 5976 consensus_queue.cc:237] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:07.793524 5840 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:33299
I20250623 14:07:07.793769 5981 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:33299 every 8 connection(s)
I20250623 14:07:07.795938 5840 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:07:07.805183 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5840
I20250623 14:07:07.806803 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:38593
--local_ip_for_outbound_sockets=127.2.78.3
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=40601
--webserver_interface=127.2.78.3
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:07.819588 5983 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:07.820094 5983 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:07.821280 5983 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:07.825158 5647 ts_manager.cc:194] Registered new tserver with Master: 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299)
I20250623 14:07:07.826200 5647 catalog_manager.cc:5582] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } health_report { overall_health: HEALTHY } } }
I20250623 14:07:07.835628 5647 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:47117
I20250623 14:07:07.838936 5983 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
W20250623 14:07:08.105409 5988 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:08.105973 5988 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:08.106467 5988 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:08.137352 5988 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:08.138456 5988 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:07:08.173333 5988 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:38593
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=40601
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:08.174620 5988 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:08.176159 5988 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:08.191761 5999 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:08.349737 5790 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:08.359658 6008 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } } }
I20250623 14:07:08.371829 5634 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet a1c4064e64534f98b92fa326520bca37 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 8)
I20250623 14:07:08.381094 5647 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 reported cstate change: config changed from index -1 to 8, NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) added. New cstate: current_term: 2 leader_uuid: "6916fcf9b530457ca60b224fc870be01" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250623 14:07:08.391692 5722 consensus_peers.cc:487] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 -> Peer 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299): Couldn't send request to peer 7e49e301bb5f4e74937085e1a83fb792. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: a1c4064e64534f98b92fa326520bca37. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:08.827159 6014 ts_tablet_manager.cc:927] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Initiating tablet copy from peer 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:07:08.829869 6014 tablet_copy_client.cc:323] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Beginning tablet copy session from remote peer at address 127.2.78.1:41875
I20250623 14:07:08.843642 5810 tablet_copy_service.cc:140] P 6916fcf9b530457ca60b224fc870be01: Received BeginTabletCopySession request for tablet a1c4064e64534f98b92fa326520bca37 from peer 7e49e301bb5f4e74937085e1a83fb792 ({username='slave'} at 127.2.78.2:52991)
I20250623 14:07:08.844285 5810 tablet_copy_service.cc:161] P 6916fcf9b530457ca60b224fc870be01: Beginning new tablet copy session on tablet a1c4064e64534f98b92fa326520bca37 from peer 7e49e301bb5f4e74937085e1a83fb792 at {username='slave'} at 127.2.78.2:52991: session id = 7e49e301bb5f4e74937085e1a83fb792-a1c4064e64534f98b92fa326520bca37
I20250623 14:07:08.849607 5810 tablet_copy_source_session.cc:215] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Tablet Copy: opened 0 blocks and 1 log segments
I20250623 14:07:08.852857 6014 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a1c4064e64534f98b92fa326520bca37. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:08.863271 6014 tablet_copy_client.cc:806] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Starting download of 0 data blocks...
I20250623 14:07:08.863780 6014 tablet_copy_client.cc:670] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Starting download of 1 WAL segments...
I20250623 14:07:08.868845 6014 tablet_copy_client.cc:538] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250623 14:07:08.874177 6014 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:08.943199 6014 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:08.944008 6014 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:08.945253 6014 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.071s user 0.066s sys 0.004s
I20250623 14:07:08.947300 6014 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:08.947890 6014 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: LEARNER
I20250623 14:07:08.948422 6014 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:08.961459 6014 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.016s user 0.010s sys 0.004s
I20250623 14:07:08.963136 5810 tablet_copy_service.cc:342] P 6916fcf9b530457ca60b224fc870be01: Request end of tablet copy session 7e49e301bb5f4e74937085e1a83fb792-a1c4064e64534f98b92fa326520bca37 received from {username='slave'} at 127.2.78.2:52991
I20250623 14:07:08.963546 5810 tablet_copy_service.cc:434] P 6916fcf9b530457ca60b224fc870be01: ending tablet copy session 7e49e301bb5f4e74937085e1a83fb792-a1c4064e64534f98b92fa326520bca37 on tablet a1c4064e64534f98b92fa326520bca37 with peer 7e49e301bb5f4e74937085e1a83fb792
I20250623 14:07:09.297554 5934 raft_consensus.cc:1215] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.7->[2.8-2.8] Dedup: 2.8->[]
W20250623 14:07:08.192204 6001 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:08.191771 5998 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:09.346915 6000 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:07:09.347049 5988 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:07:09.350731 5988 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:09.352866 5988 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:09.354264 5988 hybrid_clock.cc:648] HybridClock initialized: now 1750687629354224 us; error 57 us; skew 500 ppm
I20250623 14:07:09.355046 5988 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:09.364595 5988 webserver.cc:469] Webserver started at http://127.2.78.3:40601/ using document root <none> and password file <none>
I20250623 14:07:09.365506 5988 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:09.365720 5988 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:09.373366 5988 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250623 14:07:09.377588 6025 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:09.378544 5988 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.003s
I20250623 14:07:09.378829 5988 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b940cd7f3f124d68bcba419c0b50f589"
format_stamp: "Formatted at 2025-06-23 14:06:56 on dist-test-slave-stbh"
I20250623 14:07:09.380584 5988 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:09.426926 5988 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:09.428274 5988 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:09.428646 5988 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:09.431038 5988 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:09.436978 6032 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:07:09.444344 5988 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:07:09.444561 5988 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250623 14:07:09.444805 5988 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:07:09.449425 5988 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:07:09.449594 5988 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.000s sys 0.002s
I20250623 14:07:09.449993 6032 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap starting.
I20250623 14:07:09.503507 6032 log.cc:826] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:09.574332 6032 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:09.575459 6032 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap complete.
I20250623 14:07:09.576797 6032 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent bootstrapping tablet: real 0.127s user 0.077s sys 0.047s
I20250623 14:07:09.591609 6032 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:07:09.594184 6032 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Initialized, Role: FOLLOWER
I20250623 14:07:09.594962 6032 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:07:09.595405 6032 raft_consensus.cc:397] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:09.595638 6032 raft_consensus.cc:491] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:09.595925 6032 raft_consensus.cc:3058] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:07:09.601038 6032 raft_consensus.cc:513] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:07:09.601658 6032 leader_election.cc:304] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b940cd7f3f124d68bcba419c0b50f589; no voters:
I20250623 14:07:09.604347 6032 leader_election.cc:290] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250623 14:07:09.604745 6120 raft_consensus.cc:2802] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:07:09.608724 6032 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent starting tablet: real 0.032s user 0.023s sys 0.009s
I20250623 14:07:09.610751 6120 raft_consensus.cc:695] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEADER]: Becoming Leader. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Running, Role: LEADER
I20250623 14:07:09.611822 6120 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } }
I20250623 14:07:09.632560 5988 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:38593
I20250623 14:07:09.633055 6144 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:38593 every 8 connection(s)
I20250623 14:07:09.635293 5988 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:07:09.643910 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 5988
I20250623 14:07:09.663240 6145 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:09.663643 6145 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:09.664748 6145 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:09.667840 5646 ts_manager.cc:194] Registered new tserver with Master: b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:09.668736 5646 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: term changed from 0 to 2, leader changed from <none> to b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3), VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) added. New cstate: current_term: 2 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } }
I20250623 14:07:09.679872 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:07:09.681248 5646 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:60231
I20250623 14:07:09.685437 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:07:09.686267 6145 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
W20250623 14:07:09.689186 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250623 14:07:09.695459 6095 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } }
I20250623 14:07:09.698566 6124 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } }
I20250623 14:07:09.706418 5632 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 85d08d41f0684b42a69070fc40c4c47c with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250623 14:07:09.707847 6029 consensus_peers.cc:487] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 -> Peer 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875): Couldn't send request to peer 6916fcf9b530457ca60b224fc870be01. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 85d08d41f0684b42a69070fc40c4c47c. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:09.709002 5647 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: config changed from index -1 to 8, NON_VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) added. New cstate: current_term: 2 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250623 14:07:09.717751 6095 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:09.720830 6120 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } } }
W20250623 14:07:09.722363 6029 consensus_peers.cc:487] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 -> Peer 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875): Couldn't send request to peer 6916fcf9b530457ca60b224fc870be01. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 85d08d41f0684b42a69070fc40c4c47c. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:09.727031 5632 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 85d08d41f0684b42a69070fc40c4c47c with cas_config_opid_index 8: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250623 14:07:09.730322 6027 consensus_peers.cc:487] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 -> Peer 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299): Couldn't send request to peer 7e49e301bb5f4e74937085e1a83fb792. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 85d08d41f0684b42a69070fc40c4c47c. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:09.729972 5647 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: config changed from index 8 to 9, NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) added. New cstate: current_term: 2 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250623 14:07:09.803462 6020 raft_consensus.cc:1062] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: attempting to promote NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 to VOTER
I20250623 14:07:09.804883 6020 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:09.808949 5934 raft_consensus.cc:1273] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Refusing update from remote peer 6916fcf9b530457ca60b224fc870be01: Log matching property violated. Preceding OpId in replica: term: 2 index: 8. Preceding OpId from leader: term: 2 index: 9. (index mismatch)
I20250623 14:07:09.810122 6159 consensus_queue.cc:1035] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Connected to new peer: Peer: permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 9, Last known committed idx: 8, Time since last communication: 0.000s
I20250623 14:07:09.816382 6020 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:09.817883 5934 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Committing config change with OpId 2.9: config changed from index 8 to 9, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:09.828060 5647 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 reported cstate change: config changed from index 8 to 9, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "6916fcf9b530457ca60b224fc870be01" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250623 14:07:09.837694 5790 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } }
I20250623 14:07:09.841572 5934 raft_consensus.cc:1273] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Refusing update from remote peer 6916fcf9b530457ca60b224fc870be01: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250623 14:07:09.843111 6162 consensus_queue.cc:1035] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Connected to new peer: Peer: permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.001s
I20250623 14:07:09.848779 6020 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } } }
I20250623 14:07:09.849902 5934 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } } }
W20250623 14:07:09.851294 5722 consensus_peers.cc:487] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 -> Peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Couldn't send request to peer b940cd7f3f124d68bcba419c0b50f589. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: a1c4064e64534f98b92fa326520bca37. This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:09.855405 5634 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet a1c4064e64534f98b92fa326520bca37 with cas_config_opid_index 9: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250623 14:07:09.859026 5646 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: config changed from index 9 to 10, NON_VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) added. New cstate: current_term: 2 leader_uuid: "6916fcf9b530457ca60b224fc870be01" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } } }
I20250623 14:07:10.155473 6165 ts_tablet_manager.cc:927] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Initiating tablet copy from peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:10.157565 6165 tablet_copy_client.cc:323] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: tablet copy: Beginning tablet copy session from remote peer at address 127.2.78.3:38593
I20250623 14:07:10.159046 6115 tablet_copy_service.cc:140] P b940cd7f3f124d68bcba419c0b50f589: Received BeginTabletCopySession request for tablet 85d08d41f0684b42a69070fc40c4c47c from peer 6916fcf9b530457ca60b224fc870be01 ({username='slave'} at 127.2.78.1:45083)
I20250623 14:07:10.159473 6115 tablet_copy_service.cc:161] P b940cd7f3f124d68bcba419c0b50f589: Beginning new tablet copy session on tablet 85d08d41f0684b42a69070fc40c4c47c from peer 6916fcf9b530457ca60b224fc870be01 at {username='slave'} at 127.2.78.1:45083: session id = 6916fcf9b530457ca60b224fc870be01-85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:10.164896 6115 tablet_copy_source_session.cc:215] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Tablet Copy: opened 0 blocks and 1 log segments
I20250623 14:07:10.168097 6165 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 85d08d41f0684b42a69070fc40c4c47c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:10.178002 6165 tablet_copy_client.cc:806] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: tablet copy: Starting download of 0 data blocks...
I20250623 14:07:10.178478 6165 tablet_copy_client.cc:670] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: tablet copy: Starting download of 1 WAL segments...
I20250623 14:07:10.182164 6165 tablet_copy_client.cc:538] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250623 14:07:10.187630 6165 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap starting.
I20250623 14:07:10.256177 6165 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:10.256799 6165 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap complete.
I20250623 14:07:10.257226 6165 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Time spent bootstrapping tablet: real 0.070s user 0.066s sys 0.001s
I20250623 14:07:10.259138 6165 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:10.259591 6165 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Initialized, Role: LEARNER
I20250623 14:07:10.259994 6165 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:10.261329 6165 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Time spent starting tablet: real 0.004s user 0.001s sys 0.003s
I20250623 14:07:10.262704 6115 tablet_copy_service.cc:342] P b940cd7f3f124d68bcba419c0b50f589: Request end of tablet copy session 6916fcf9b530457ca60b224fc870be01-85d08d41f0684b42a69070fc40c4c47c received from {username='slave'} at 127.2.78.1:45083
I20250623 14:07:10.263044 6115 tablet_copy_service.cc:434] P b940cd7f3f124d68bcba419c0b50f589: ending tablet copy session 6916fcf9b530457ca60b224fc870be01-85d08d41f0684b42a69070fc40c4c47c on tablet 85d08d41f0684b42a69070fc40c4c47c with peer 6916fcf9b530457ca60b224fc870be01
I20250623 14:07:10.336308 6168 ts_tablet_manager.cc:927] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Initiating tablet copy from peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:10.337425 6168 tablet_copy_client.cc:323] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Beginning tablet copy session from remote peer at address 127.2.78.3:38593
I20250623 14:07:10.345211 6115 tablet_copy_service.cc:140] P b940cd7f3f124d68bcba419c0b50f589: Received BeginTabletCopySession request for tablet 85d08d41f0684b42a69070fc40c4c47c from peer 7e49e301bb5f4e74937085e1a83fb792 ({username='slave'} at 127.2.78.2:56791)
I20250623 14:07:10.345583 6115 tablet_copy_service.cc:161] P b940cd7f3f124d68bcba419c0b50f589: Beginning new tablet copy session on tablet 85d08d41f0684b42a69070fc40c4c47c from peer 7e49e301bb5f4e74937085e1a83fb792 at {username='slave'} at 127.2.78.2:56791: session id = 7e49e301bb5f4e74937085e1a83fb792-85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:10.349478 6115 tablet_copy_source_session.cc:215] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Tablet Copy: opened 0 blocks and 1 log segments
I20250623 14:07:10.351580 6168 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 85d08d41f0684b42a69070fc40c4c47c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:10.360966 6168 tablet_copy_client.cc:806] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Starting download of 0 data blocks...
I20250623 14:07:10.361421 6168 tablet_copy_client.cc:670] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Starting download of 1 WAL segments...
I20250623 14:07:10.364488 6168 tablet_copy_client.cc:538] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250623 14:07:10.369680 6168 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:10.399667 6172 ts_tablet_manager.cc:927] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Initiating tablet copy from peer 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:07:10.401546 6172 tablet_copy_client.cc:323] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: tablet copy: Beginning tablet copy session from remote peer at address 127.2.78.1:41875
I20250623 14:07:10.403025 5810 tablet_copy_service.cc:140] P 6916fcf9b530457ca60b224fc870be01: Received BeginTabletCopySession request for tablet a1c4064e64534f98b92fa326520bca37 from peer b940cd7f3f124d68bcba419c0b50f589 ({username='slave'} at 127.2.78.3:58285)
I20250623 14:07:10.403405 5810 tablet_copy_service.cc:161] P 6916fcf9b530457ca60b224fc870be01: Beginning new tablet copy session on tablet a1c4064e64534f98b92fa326520bca37 from peer b940cd7f3f124d68bcba419c0b50f589 at {username='slave'} at 127.2.78.3:58285: session id = b940cd7f3f124d68bcba419c0b50f589-a1c4064e64534f98b92fa326520bca37
I20250623 14:07:10.407575 5810 tablet_copy_source_session.cc:215] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Tablet Copy: opened 0 blocks and 1 log segments
I20250623 14:07:10.410187 6172 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a1c4064e64534f98b92fa326520bca37. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:10.421984 6172 tablet_copy_client.cc:806] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: tablet copy: Starting download of 0 data blocks...
I20250623 14:07:10.422497 6172 tablet_copy_client.cc:670] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: tablet copy: Starting download of 1 WAL segments...
I20250623 14:07:10.426190 6172 tablet_copy_client.cc:538] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250623 14:07:10.431435 6172 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap starting.
I20250623 14:07:10.440152 6168 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:10.440694 6168 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:10.441098 6168 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.072s user 0.067s sys 0.004s
I20250623 14:07:10.442611 6168 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:10.443163 6168 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: LEARNER
I20250623 14:07:10.443567 6168 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: NON_VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: true } }
I20250623 14:07:10.446149 6168 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.005s user 0.008s sys 0.000s
I20250623 14:07:10.447904 6115 tablet_copy_service.cc:342] P b940cd7f3f124d68bcba419c0b50f589: Request end of tablet copy session 7e49e301bb5f4e74937085e1a83fb792-85d08d41f0684b42a69070fc40c4c47c received from {username='slave'} at 127.2.78.2:56791
I20250623 14:07:10.448215 6115 tablet_copy_service.cc:434] P b940cd7f3f124d68bcba419c0b50f589: ending tablet copy session 7e49e301bb5f4e74937085e1a83fb792-85d08d41f0684b42a69070fc40c4c47c on tablet 85d08d41f0684b42a69070fc40c4c47c with peer 7e49e301bb5f4e74937085e1a83fb792
I20250623 14:07:10.504516 6172 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:10.505125 6172 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap complete.
I20250623 14:07:10.505510 6172 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Time spent bootstrapping tablet: real 0.074s user 0.060s sys 0.012s
I20250623 14:07:10.507161 6172 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } }
I20250623 14:07:10.507580 6172 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Initialized, Role: LEARNER
I20250623 14:07:10.507951 6172 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: NON_VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: true } }
I20250623 14:07:10.509406 6172 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Time spent starting tablet: real 0.004s user 0.000s sys 0.004s
I20250623 14:07:10.510903 5810 tablet_copy_service.cc:342] P 6916fcf9b530457ca60b224fc870be01: Request end of tablet copy session b940cd7f3f124d68bcba419c0b50f589-a1c4064e64534f98b92fa326520bca37 received from {username='slave'} at 127.2.78.3:58285
I20250623 14:07:10.511193 5810 tablet_copy_service.cc:434] P 6916fcf9b530457ca60b224fc870be01: ending tablet copy session b940cd7f3f124d68bcba419c0b50f589-a1c4064e64534f98b92fa326520bca37 on tablet a1c4064e64534f98b92fa326520bca37 with peer b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:10.611879 5633 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet a1c4064e64534f98b92fa326520bca37 with cas_config_opid_index 8: aborting the task: latest config opid_index 10; task opid_index 8
I20250623 14:07:10.693563 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 6916fcf9b530457ca60b224fc870be01 to finish bootstrapping
I20250623 14:07:10.705417 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 7e49e301bb5f4e74937085e1a83fb792 to finish bootstrapping
I20250623 14:07:10.716010 5934 raft_consensus.cc:1215] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250623 14:07:10.721908 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver b940cd7f3f124d68bcba419c0b50f589 to finish bootstrapping
I20250623 14:07:10.741969 5790 raft_consensus.cc:1215] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250623 14:07:10.927652 6095 raft_consensus.cc:1215] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
I20250623 14:07:10.997380 5770 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250623 14:07:10.999598 5914 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250623 14:07:11.006138 6075 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+-------------------+---------
143c3bc2d119440cbb3dff3b78eff53d | 127.2.78.62:40255 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:34617 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+------------------+---------+----------+----------------+-----------------
6916fcf9b530457ca60b224fc870be01 | 127.2.78.1:41875 | HEALTHY | <none> | 1 | 0
7e49e301bb5f4e74937085e1a83fb792 | 127.2.78.2:33299 | HEALTHY | <none> | 1 | 0
b940cd7f3f124d68bcba419c0b50f589 | 127.2.78.3:38593 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.78.1 | experimental | 127.2.78.1:41875
local_ip_for_outbound_sockets | 127.2.78.2 | experimental | 127.2.78.2:33299
local_ip_for_outbound_sockets | 127.2.78.3 | experimental | 127.2.78.3:38593
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.2.78.1:41875
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.2.78.2:33299
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.2.78.3:38593
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:34617 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250623 14:07:11.217193 2360 log_verifier.cc:126] Checking tablet 6b841f86f0284c8b985ece6b0e89a2ce
I20250623 14:07:11.245661 6208 raft_consensus.cc:1062] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: attempting to promote NON_VOTER 7e49e301bb5f4e74937085e1a83fb792 to VOTER
I20250623 14:07:11.247253 6208 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:11.249785 2360 log_verifier.cc:177] Verified matching terms for 8 ops in tablet 6b841f86f0284c8b985ece6b0e89a2ce
I20250623 14:07:11.250180 2360 log_verifier.cc:126] Checking tablet 85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:11.252648 5934 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 LEARNER]: Refusing update from remote peer b940cd7f3f124d68bcba419c0b50f589: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250623 14:07:11.252974 5790 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Refusing update from remote peer b940cd7f3f124d68bcba419c0b50f589: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250623 14:07:11.253983 6210 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Connected to new peer: Peer: permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
I20250623 14:07:11.254633 6208 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
I20250623 14:07:11.261214 6210 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.262748 5934 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.270756 6208 raft_consensus.cc:1062] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: attempting to promote NON_VOTER 6916fcf9b530457ca60b224fc870be01 to VOTER
I20250623 14:07:11.268929 5790 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.272322 6208 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:11.272895 5647 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: config changed from index 9 to 10, 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: NON_VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250623 14:07:11.285617 5933 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Refusing update from remote peer b940cd7f3f124d68bcba419c0b50f589: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250623 14:07:11.289036 5789 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 LEARNER]: Refusing update from remote peer b940cd7f3f124d68bcba419c0b50f589: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250623 14:07:11.290081 6213 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Connected to new peer: Peer: permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250623 14:07:11.293800 6208 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250623 14:07:11.296592 6210 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.297983 5933 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.309047 5788 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:11.322826 5646 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 reported cstate change: config changed from index 10 to 11, 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "b940cd7f3f124d68bcba419c0b50f589" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250623 14:07:11.361315 2360 log_verifier.cc:177] Verified matching terms for 11 ops in tablet 85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:11.361583 2360 log_verifier.cc:126] Checking tablet a1c4064e64534f98b92fa326520bca37
I20250623 14:07:11.429286 6199 raft_consensus.cc:1062] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: attempting to promote NON_VOTER b940cd7f3f124d68bcba419c0b50f589 to VOTER
I20250623 14:07:11.430704 6199 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:11.436164 6095 raft_consensus.cc:1273] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 LEARNER]: Refusing update from remote peer 6916fcf9b530457ca60b224fc870be01: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250623 14:07:11.436617 5933 raft_consensus.cc:1273] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Refusing update from remote peer 6916fcf9b530457ca60b224fc870be01: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250623 14:07:11.438871 6199 consensus_queue.cc:1035] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250623 14:07:11.439885 6219 consensus_queue.cc:1035] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [LEADER]: Connected to new peer: Peer: permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250623 14:07:11.443590 2360 log_verifier.cc:177] Verified matching terms for 10 ops in tablet a1c4064e64534f98b92fa326520bca37
I20250623 14:07:11.444028 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5614
I20250623 14:07:11.447342 6199 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } } }
I20250623 14:07:11.448855 5933 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } } }
I20250623 14:07:11.450392 6095 raft_consensus.cc:2953] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } } }
W20250623 14:07:11.477982 6028 connection.cc:537] client connection to 127.2.78.62:40255 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250623 14:07:11.478024 5867 connection.cc:537] client connection to 127.2.78.62:40255 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250623 14:07:11.478825 2360 minidump.cc:252] Setting minidump size limit to 20M
W20250623 14:07:11.478910 5983 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250623 14:07:11.478978 6145 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250623 14:07:11.479568 5723 connection.cc:537] client connection to 127.2.78.62:40255 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250623 14:07:11.480316 2360 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
W20250623 14:07:11.480409 5835 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:40255 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250623 14:07:11.481685 2360 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:11.494040 6228 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:11.494139 6227 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:11.494838 6230 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:11.495146 2360 server_base.cc:1048] running on GCE node
I20250623 14:07:11.588232 2360 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250623 14:07:11.588445 2360 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250623 14:07:11.588621 2360 hybrid_clock.cc:648] HybridClock initialized: now 1750687631588602 us; error 0 us; skew 500 ppm
I20250623 14:07:11.589272 2360 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:11.592113 2360 webserver.cc:469] Webserver started at http://0.0.0.0:34677/ using document root <none> and password file <none>
I20250623 14:07:11.592862 2360 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:11.593050 2360 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:11.598131 2360 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.002s sys 0.001s
I20250623 14:07:11.601542 6235 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:11.602492 2360 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250623 14:07:11.602779 2360 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:07:11.604382 2360 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:11.616390 2360 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:11.617677 2360 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:11.618055 2360 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:11.626149 2360 sys_catalog.cc:263] Verifying existing consensus state
W20250623 14:07:11.629287 2360 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.2.78.62:40255 exists but no master address supplied!
I20250623 14:07:11.631422 2360 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap starting.
I20250623 14:07:11.692121 2360 log.cc:826] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:11.751627 2360 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap replayed 1/1 log segments. Stats: ops{read=29 overwritten=0 applied=29 ignored=0} inserts{seen=13 ignored=0} mutations{seen=20 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:11.752380 2360 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap complete.
I20250623 14:07:11.765240 2360 raft_consensus.cc:357] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:11.765837 2360 raft_consensus.cc:738] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Initialized, Role: FOLLOWER
I20250623 14:07:11.766546 2360 consensus_queue.cc:260] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 29, Last appended: 3.29, Last appended by leader: 29, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:11.766992 2360 raft_consensus.cc:397] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:11.767197 2360 raft_consensus.cc:491] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:11.767449 2360 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 3 FOLLOWER]: Advancing to term 4
I20250623 14:07:11.772543 2360 raft_consensus.cc:513] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:11.773198 2360 leader_election.cc:304] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 143c3bc2d119440cbb3dff3b78eff53d; no voters:
I20250623 14:07:11.774340 2360 leader_election.cc:290] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 4 election: Requested vote from peers
I20250623 14:07:11.774580 6242 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 4 FOLLOWER]: Leader election won for term 4
I20250623 14:07:11.776438 6242 raft_consensus.cc:695] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 4 LEADER]: Becoming Leader. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Running, Role: LEADER
I20250623 14:07:11.777212 6242 consensus_queue.cc:237] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 29, Committed index: 29, Last appended: 3.29, Last appended by leader: 29, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:11.783547 6243 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:11.784133 6243 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:11.784798 6244 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: New leader 143c3bc2d119440cbb3dff3b78eff53d. Latest consensus state: current_term: 4 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:11.785631 6244 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:11.810801 2360 tablet_replica.cc:331] stopping tablet replica
I20250623 14:07:11.811381 2360 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 4 LEADER]: Raft consensus shutting down.
I20250623 14:07:11.811777 2360 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 4 FOLLOWER]: Raft consensus is shut down!
I20250623 14:07:11.814077 2360 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250623 14:07:11.814607 2360 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250623 14:07:11.844771 2360 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250623 14:07:16.949935 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5684
I20250623 14:07:16.978526 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5840
I20250623 14:07:17.005784 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 5988
I20250623 14:07:17.036520 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--webserver_interface=127.2.78.62
--webserver_port=44467
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:40255 with env {}
W20250623 14:07:17.330878 6316 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:17.331481 6316 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:17.331977 6316 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:17.362746 6316 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:07:17.363103 6316 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:17.363361 6316 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:07:17.363620 6316 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:07:17.397807 6316 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:40255
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40255
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=44467
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:17.399091 6316 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:17.400657 6316 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:17.415066 6323 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:17.416038 6325 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:17.415102 6322 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:17.416890 6316 server_base.cc:1048] running on GCE node
I20250623 14:07:18.573537 6316 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:18.576057 6316 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:18.577404 6316 hybrid_clock.cc:648] HybridClock initialized: now 1750687638577357 us; error 57 us; skew 500 ppm
I20250623 14:07:18.578259 6316 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:18.584265 6316 webserver.cc:469] Webserver started at http://127.2.78.62:44467/ using document root <none> and password file <none>
I20250623 14:07:18.585232 6316 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:18.585433 6316 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:18.592679 6316 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.004s sys 0.002s
I20250623 14:07:18.598636 6332 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:18.599578 6316 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.002s
I20250623 14:07:18.599885 6316 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "143c3bc2d119440cbb3dff3b78eff53d"
format_stamp: "Formatted at 2025-06-23 14:06:50 on dist-test-slave-stbh"
I20250623 14:07:18.601737 6316 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:18.650022 6316 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:18.651439 6316 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:18.651856 6316 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:18.720046 6316 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:40255
I20250623 14:07:18.720124 6383 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:40255 every 8 connection(s)
I20250623 14:07:18.722785 6316 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:07:18.727483 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6316
I20250623 14:07:18.729409 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:41875
--local_ip_for_outbound_sockets=127.2.78.1
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=34851
--webserver_interface=127.2.78.1
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:18.735993 6384 sys_catalog.cc:263] Verifying existing consensus state
I20250623 14:07:18.742746 6384 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap starting.
I20250623 14:07:18.752676 6384 log.cc:826] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:18.825702 6384 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap replayed 1/1 log segments. Stats: ops{read=33 overwritten=0 applied=33 ignored=0} inserts{seen=15 ignored=0} mutations{seen=22 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:18.826509 6384 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Bootstrap complete.
I20250623 14:07:18.844094 6384 raft_consensus.cc:357] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:18.846006 6384 raft_consensus.cc:738] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Initialized, Role: FOLLOWER
I20250623 14:07:18.846751 6384 consensus_queue.cc:260] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 33, Last appended: 5.33, Last appended by leader: 33, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:18.847208 6384 raft_consensus.cc:397] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:18.847441 6384 raft_consensus.cc:491] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:18.847698 6384 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 5 FOLLOWER]: Advancing to term 6
I20250623 14:07:18.852694 6384 raft_consensus.cc:513] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:18.853274 6384 leader_election.cc:304] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 143c3bc2d119440cbb3dff3b78eff53d; no voters:
I20250623 14:07:18.855329 6384 leader_election.cc:290] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [CANDIDATE]: Term 6 election: Requested vote from peers
I20250623 14:07:18.855882 6388 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 6 FOLLOWER]: Leader election won for term 6
I20250623 14:07:18.858659 6388 raft_consensus.cc:695] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [term 6 LEADER]: Becoming Leader. State: Replica: 143c3bc2d119440cbb3dff3b78eff53d, State: Running, Role: LEADER
I20250623 14:07:18.859606 6384 sys_catalog.cc:564] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:07:18.859436 6388 consensus_queue.cc:237] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 33, Committed index: 33, Last appended: 5.33, Last appended by leader: 33, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } }
I20250623 14:07:18.871055 6389 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:18.871680 6389 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:18.873015 6390 sys_catalog.cc:455] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: SysCatalogTable state changed. Reason: New leader 143c3bc2d119440cbb3dff3b78eff53d. Latest consensus state: current_term: 6 leader_uuid: "143c3bc2d119440cbb3dff3b78eff53d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "143c3bc2d119440cbb3dff3b78eff53d" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40255 } } }
I20250623 14:07:18.873585 6390 sys_catalog.cc:458] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:18.879894 6396 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:07:18.892643 6396 catalog_manager.cc:671] Loaded metadata for table TestTable [id=141ed96f42464f3f849c1ad214641564]
I20250623 14:07:18.894260 6396 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=2c9f2fe916b34b4ab29fb394313c79b8]
I20250623 14:07:18.895823 6396 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=40c2ba80364649e4ab2f1129ff0fff0c]
I20250623 14:07:18.903609 6396 tablet_loader.cc:96] loaded metadata for tablet 6b841f86f0284c8b985ece6b0e89a2ce (table TestTable2 [id=40c2ba80364649e4ab2f1129ff0fff0c])
I20250623 14:07:18.904939 6396 tablet_loader.cc:96] loaded metadata for tablet 85d08d41f0684b42a69070fc40c4c47c (table TestTable [id=141ed96f42464f3f849c1ad214641564])
I20250623 14:07:18.906337 6396 tablet_loader.cc:96] loaded metadata for tablet a1c4064e64534f98b92fa326520bca37 (table TestTable1 [id=2c9f2fe916b34b4ab29fb394313c79b8])
I20250623 14:07:18.907671 6396 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:07:18.913966 6396 catalog_manager.cc:1261] Loaded cluster ID: dd11aa33191a4d66a13dfe050ac01c88
I20250623 14:07:18.914249 6396 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:07:18.923085 6396 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:07:18.929270 6396 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 143c3bc2d119440cbb3dff3b78eff53d: Loaded TSK: 0
I20250623 14:07:18.930840 6396 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250623 14:07:19.065387 6386 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:19.065897 6386 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:19.066465 6386 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:19.096714 6386 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:19.097621 6386 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:07:19.131454 6386 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:41875
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=34851
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:19.132742 6386 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:19.134323 6386 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:19.151863 6411 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:20.550092 6410 debug-util.cc:398] Leaking SignalData structure 0x7b08000184e0 after lost signal to thread 6386
W20250623 14:07:20.900779 6386 thread.cc:640] OpenStack (cloud detector) Time spent creating pthread: real 1.754s user 0.580s sys 1.119s
W20250623 14:07:19.152307 6412 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:20.901211 6386 thread.cc:606] OpenStack (cloud detector) Time spent starting thread: real 1.754s user 0.580s sys 1.120s
W20250623 14:07:20.903121 6414 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:20.906216 6386 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250623 14:07:20.906210 6413 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1752 milliseconds
I20250623 14:07:20.907591 6386 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:20.909641 6386 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:20.911044 6386 hybrid_clock.cc:648] HybridClock initialized: now 1750687640910995 us; error 56 us; skew 500 ppm
I20250623 14:07:20.911815 6386 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:20.922308 6386 webserver.cc:469] Webserver started at http://127.2.78.1:34851/ using document root <none> and password file <none>
I20250623 14:07:20.923203 6386 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:20.923420 6386 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:20.931289 6386 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.004s sys 0.000s
I20250623 14:07:20.936342 6421 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:20.937388 6386 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250623 14:07:20.937683 6386 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "6916fcf9b530457ca60b224fc870be01"
format_stamp: "Formatted at 2025-06-23 14:06:52 on dist-test-slave-stbh"
I20250623 14:07:20.939610 6386 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:20.998962 6386 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:21.000470 6386 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:21.000909 6386 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:21.004082 6386 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:21.011006 6428 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250623 14:07:21.022544 6386 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250623 14:07:21.022774 6386 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.014s user 0.000s sys 0.003s
I20250623 14:07:21.023072 6386 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250623 14:07:21.028344 6428 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap starting.
I20250623 14:07:21.033200 6386 ts_tablet_manager.cc:610] Registered 2 tablets
I20250623 14:07:21.033495 6386 ts_tablet_manager.cc:589] Time spent register tablets: real 0.010s user 0.007s sys 0.000s
I20250623 14:07:21.104863 6428 log.cc:826] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:21.224653 6386 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:41875
I20250623 14:07:21.224874 6535 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:41875 every 8 connection(s)
I20250623 14:07:21.228358 6386 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:07:21.234212 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6386
I20250623 14:07:21.236583 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:33299
--local_ip_for_outbound_sockets=127.2.78.2
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=40613
--webserver_interface=127.2.78.2
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:21.257474 6428 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:21.258540 6428 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Bootstrap complete.
I20250623 14:07:21.260277 6428 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Time spent bootstrapping tablet: real 0.232s user 0.167s sys 0.044s
I20250623 14:07:21.276417 6536 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:21.276890 6536 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:21.278139 6536 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:21.277478 6428 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:21.280244 6428 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Initialized, Role: FOLLOWER
I20250623 14:07:21.281013 6428 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:21.293176 6349 ts_manager.cc:194] Registered new tserver with Master: 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:07:21.299335 6349 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 reported cstate change: config changed from index -1 to 11, term changed from 0 to 2, VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) added, VOTER 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2) added, VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) added. New cstate: current_term: 2 committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:21.303053 6349 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 reported cstate change: config changed from index 10 to 11, leader changed from 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) to <none>, b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) changed from NON_VOTER to VOTER. New cstate: current_term: 2 committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } } }
I20250623 14:07:21.315830 6428 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Time spent starting tablet: real 0.055s user 0.030s sys 0.023s
I20250623 14:07:21.316702 6428 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap starting.
I20250623 14:07:21.376482 6349 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:41485
I20250623 14:07:21.381062 6536 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
I20250623 14:07:21.461022 6428 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:21.461992 6428 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Bootstrap complete.
I20250623 14:07:21.463538 6428 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent bootstrapping tablet: real 0.147s user 0.129s sys 0.015s
I20250623 14:07:21.465924 6428 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:21.466603 6428 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6916fcf9b530457ca60b224fc870be01, State: Initialized, Role: FOLLOWER
I20250623 14:07:21.467224 6428 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:21.469116 6428 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
W20250623 14:07:21.613119 6540 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:21.613641 6540 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:21.614168 6540 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:21.650355 6540 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:21.651181 6540 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:07:21.685397 6540 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:33299
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=40613
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:21.686686 6540 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:21.688279 6540 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:21.704975 6555 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:21.706033 6552 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:21.707165 6553 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:21.707228 6540 server_base.cc:1048] running on GCE node
I20250623 14:07:22.722102 6558 raft_consensus.cc:491] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:07:22.722643 6558 raft_consensus.cc:513] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:22.725497 6558 leader_election.cc:290] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593), 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299)
W20250623 14:07:22.731422 6423 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
W20250623 14:07:22.736181 6423 leader_election.cc:336] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
W20250623 14:07:22.736822 6423 leader_election.cc:336] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299): Network error: Client connection negotiation failed: client connection to 127.2.78.2:33299: connect: Connection refused (error 111)
I20250623 14:07:22.737187 6423 leader_election.cc:304] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01; no voters: 7e49e301bb5f4e74937085e1a83fb792, b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:22.738011 6558 raft_consensus.cc:2747] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250623 14:07:22.870337 6540 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:22.872552 6540 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:22.873869 6540 hybrid_clock.cc:648] HybridClock initialized: now 1750687642873842 us; error 55 us; skew 500 ppm
I20250623 14:07:22.874616 6540 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:22.881247 6540 webserver.cc:469] Webserver started at http://127.2.78.2:40613/ using document root <none> and password file <none>
I20250623 14:07:22.882158 6540 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:22.882344 6540 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:22.889806 6540 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250623 14:07:22.894243 6566 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:22.895177 6540 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250623 14:07:22.895450 6540 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "7e49e301bb5f4e74937085e1a83fb792"
format_stamp: "Formatted at 2025-06-23 14:06:54 on dist-test-slave-stbh"
I20250623 14:07:22.897235 6540 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:22.951540 6540 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:22.952991 6540 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:22.953393 6540 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:22.955806 6540 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:22.961202 6573 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250623 14:07:22.976718 6540 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250623 14:07:22.976995 6540 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.017s user 0.002s sys 0.000s
I20250623 14:07:22.977277 6540 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250623 14:07:22.982435 6573 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:22.992780 6540 ts_tablet_manager.cc:610] Registered 3 tablets
I20250623 14:07:22.993047 6540 ts_tablet_manager.cc:589] Time spent register tablets: real 0.016s user 0.013s sys 0.000s
I20250623 14:07:23.055830 6573 log.cc:826] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:23.107960 6558 raft_consensus.cc:491] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:07:23.108433 6558 raft_consensus.cc:513] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:23.110416 6558 leader_election.cc:290] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299), b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
W20250623 14:07:23.119074 6423 leader_election.cc:336] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299): Network error: Client connection negotiation failed: client connection to 127.2.78.2:33299: connect: Connection refused (error 111)
W20250623 14:07:23.120498 6423 leader_election.cc:336] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
I20250623 14:07:23.120893 6423 leader_election.cc:304] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01; no voters: 7e49e301bb5f4e74937085e1a83fb792, b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:23.121635 6558 raft_consensus.cc:2747] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250623 14:07:23.137849 6573 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:23.138957 6573 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:23.140213 6573 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.158s user 0.100s sys 0.054s
I20250623 14:07:23.157122 6573 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:23.159696 6573 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: FOLLOWER
I20250623 14:07:23.160545 6573 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:23.164116 6573 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.024s user 0.022s sys 0.003s
I20250623 14:07:23.164763 6573 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:23.175529 6540 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:33299
I20250623 14:07:23.175930 6681 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:33299 every 8 connection(s)
I20250623 14:07:23.178448 6540 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:07:23.188424 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6540
I20250623 14:07:23.190487 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:38593
--local_ip_for_outbound_sockets=127.2.78.3
--tserver_master_addrs=127.2.78.62:40255
--webserver_port=40601
--webserver_interface=127.2.78.3
--builtin_ntp_servers=127.2.78.20:34617
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250623 14:07:23.204667 6682 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:23.205114 6682 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:23.206115 6682 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:23.209605 6348 ts_manager.cc:194] Registered new tserver with Master: 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2:33299)
I20250623 14:07:23.213196 6348 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:42645
I20250623 14:07:23.288844 6573 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:23.289537 6573 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:23.290652 6573 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.126s user 0.104s sys 0.020s
I20250623 14:07:23.292229 6573 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:23.292644 6573 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: FOLLOWER
I20250623 14:07:23.293102 6573 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:23.294061 6682 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
I20250623 14:07:23.294620 6573 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250623 14:07:23.295373 6573 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap starting.
I20250623 14:07:23.381572 6573 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:23.382266 6573 tablet_bootstrap.cc:492] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Bootstrap complete.
I20250623 14:07:23.383416 6573 ts_tablet_manager.cc:1397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent bootstrapping tablet: real 0.088s user 0.074s sys 0.012s
I20250623 14:07:23.384948 6573 raft_consensus.cc:357] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:23.385282 6573 raft_consensus.cc:738] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Initialized, Role: FOLLOWER
I20250623 14:07:23.385741 6573 consensus_queue.cc:260] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:23.386287 6573 raft_consensus.cc:397] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:23.386525 6573 raft_consensus.cc:491] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:23.386811 6573 raft_consensus.cc:3058] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Advancing to term 3
I20250623 14:07:23.391700 6573 raft_consensus.cc:513] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:23.392283 6573 leader_election.cc:304] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7e49e301bb5f4e74937085e1a83fb792; no voters:
I20250623 14:07:23.392789 6573 leader_election.cc:290] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250623 14:07:23.392977 6673 raft_consensus.cc:2802] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Leader election won for term 3
I20250623 14:07:23.396561 6573 ts_tablet_manager.cc:1428] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792: Time spent starting tablet: real 0.013s user 0.006s sys 0.008s
I20250623 14:07:23.396659 6673 raft_consensus.cc:695] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [term 3 LEADER]: Becoming Leader. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Running, Role: LEADER
I20250623 14:07:23.397425 6673 consensus_queue.cc:237] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } }
I20250623 14:07:23.410593 6348 catalog_manager.cc:5582] T 6b841f86f0284c8b985ece6b0e89a2ce P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } health_report { overall_health: HEALTHY } } }
W20250623 14:07:23.533540 6686 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:23.534057 6686 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:23.534492 6686 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:23.565188 6686 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:23.565986 6686 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:07:23.600219 6686 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:34617
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:38593
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=40601
--tserver_master_addrs=127.2.78.62:40255
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:23.601437 6686 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:23.603056 6686 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:23.619207 6703 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:24.405083 6708 raft_consensus.cc:491] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:07:24.405494 6708 raft_consensus.cc:513] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:24.407532 6708 leader_election.cc:290] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593), 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
W20250623 14:07:24.424882 6568 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
W20250623 14:07:24.430626 6568 leader_election.cc:336] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
I20250623 14:07:24.439550 6491 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" candidate_uuid: "7e49e301bb5f4e74937085e1a83fb792" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "6916fcf9b530457ca60b224fc870be01" is_pre_election: true
I20250623 14:07:24.440712 6491 raft_consensus.cc:2466] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 7e49e301bb5f4e74937085e1a83fb792 in term 2.
I20250623 14:07:24.442714 6570 leader_election.cc:304] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01, 7e49e301bb5f4e74937085e1a83fb792; no voters: b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:24.443806 6708 raft_consensus.cc:2802] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250623 14:07:24.444125 6708 raft_consensus.cc:491] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:07:24.444370 6708 raft_consensus.cc:3058] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Advancing to term 3
I20250623 14:07:24.448594 6708 raft_consensus.cc:513] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:24.450016 6708 leader_election.cc:290] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Requested vote from peers b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593), 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875)
I20250623 14:07:24.451217 6491 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" candidate_uuid: "7e49e301bb5f4e74937085e1a83fb792" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "6916fcf9b530457ca60b224fc870be01"
I20250623 14:07:24.452119 6491 raft_consensus.cc:3058] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Advancing to term 3
W20250623 14:07:24.453305 6568 leader_election.cc:336] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
I20250623 14:07:24.466473 6491 raft_consensus.cc:2466] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 7e49e301bb5f4e74937085e1a83fb792 in term 3.
I20250623 14:07:24.467844 6570 leader_election.cc:304] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01, 7e49e301bb5f4e74937085e1a83fb792; no voters: b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:24.468446 6708 raft_consensus.cc:2802] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Leader election won for term 3
I20250623 14:07:24.470196 6708 raft_consensus.cc:695] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 3 LEADER]: Becoming Leader. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Running, Role: LEADER
I20250623 14:07:24.470903 6708 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:24.477706 6348 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: term changed from 2 to 3, leader changed from <none> to 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2). New cstate: current_term: 3 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250623 14:07:24.565627 6708 raft_consensus.cc:491] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:07:24.566072 6708 raft_consensus.cc:513] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:24.567483 6708 leader_election.cc:290] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875), b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:24.568910 6491 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a1c4064e64534f98b92fa326520bca37" candidate_uuid: "7e49e301bb5f4e74937085e1a83fb792" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "6916fcf9b530457ca60b224fc870be01" is_pre_election: true
I20250623 14:07:24.569419 6491 raft_consensus.cc:2466] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 7e49e301bb5f4e74937085e1a83fb792 in term 2.
I20250623 14:07:24.570329 6570 leader_election.cc:304] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01, 7e49e301bb5f4e74937085e1a83fb792; no voters:
I20250623 14:07:24.570989 6708 raft_consensus.cc:2802] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250623 14:07:24.571333 6708 raft_consensus.cc:491] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:07:24.571664 6708 raft_consensus.cc:3058] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 2 FOLLOWER]: Advancing to term 3
W20250623 14:07:24.572000 6568 leader_election.cc:336] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
I20250623 14:07:24.575966 6708 raft_consensus.cc:513] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:24.577425 6708 leader_election.cc:290] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Requested vote from peers 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875), b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:24.578181 6491 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a1c4064e64534f98b92fa326520bca37" candidate_uuid: "7e49e301bb5f4e74937085e1a83fb792" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "6916fcf9b530457ca60b224fc870be01"
I20250623 14:07:24.578658 6491 raft_consensus.cc:3058] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 2 FOLLOWER]: Advancing to term 3
W20250623 14:07:24.582219 6568 leader_election.cc:336] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111)
I20250623 14:07:24.583886 6491 raft_consensus.cc:2466] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 7e49e301bb5f4e74937085e1a83fb792 in term 3.
I20250623 14:07:24.584836 6570 leader_election.cc:304] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 6916fcf9b530457ca60b224fc870be01, 7e49e301bb5f4e74937085e1a83fb792; no voters: b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:24.585402 6708 raft_consensus.cc:2802] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 3 FOLLOWER]: Leader election won for term 3
I20250623 14:07:24.585815 6708 raft_consensus.cc:695] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [term 3 LEADER]: Becoming Leader. State: Replica: 7e49e301bb5f4e74937085e1a83fb792, State: Running, Role: LEADER
I20250623 14:07:24.586464 6708 consensus_queue.cc:237] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:24.595108 6348 catalog_manager.cc:5582] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: term changed from 2 to 3, leader changed from <none> to 7e49e301bb5f4e74937085e1a83fb792 (127.2.78.2). New cstate: current_term: 3 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
W20250623 14:07:23.619201 6702 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:23.619186 6705 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:23.621390 6686 server_base.cc:1048] running on GCE node
I20250623 14:07:24.816906 6686 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:24.819087 6686 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:24.820415 6686 hybrid_clock.cc:648] HybridClock initialized: now 1750687644820375 us; error 52 us; skew 500 ppm
I20250623 14:07:24.821208 6686 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:24.827157 6686 webserver.cc:469] Webserver started at http://127.2.78.3:40601/ using document root <none> and password file <none>
I20250623 14:07:24.828030 6686 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:24.828228 6686 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:24.835784 6686 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250623 14:07:24.840082 6724 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:24.841014 6686 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:07:24.841313 6686 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b940cd7f3f124d68bcba419c0b50f589"
format_stamp: "Formatted at 2025-06-23 14:06:56 on dist-test-slave-stbh"
I20250623 14:07:24.843231 6686 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:24.896440 6491 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Refusing update from remote peer 7e49e301bb5f4e74937085e1a83fb792: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250623 14:07:24.897888 6708 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250623 14:07:24.905849 6686 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:24.907810 6686 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:24.908421 6686 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:24.919207 6686 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
W20250623 14:07:24.920836 6568 consensus_peers.cc:487] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 -> Peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Couldn't send request to peer b940cd7f3f124d68bcba419c0b50f589. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:24.928107 6737 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250623 14:07:24.933694 6636 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 3.12, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:24.937175 6491 raft_consensus.cc:1273] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Refusing update from remote peer 7e49e301bb5f4e74937085e1a83fb792: Log matching property violated. Preceding OpId in replica: term: 3 index: 12. Preceding OpId from leader: term: 3 index: 13. (index mismatch)
I20250623 14:07:24.938145 6708 consensus_queue.cc:1035] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250623 14:07:24.940725 6686 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250623 14:07:24.940935 6686 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.015s user 0.000s sys 0.002s
I20250623 14:07:24.941192 6686 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250623 14:07:24.943190 6713 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 3 LEADER]: Committing config change with OpId 3.13: config changed from index 11 to 13, VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) evicted. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:24.944329 6491 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Committing config change with OpId 3.13: config changed from index 11 to 13, VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) evicted. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:24.948794 6737 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap starting.
I20250623 14:07:24.953477 6686 ts_tablet_manager.cc:610] Registered 2 tablets
I20250623 14:07:24.953836 6686 ts_tablet_manager.cc:589] Time spent register tablets: real 0.013s user 0.013s sys 0.000s
I20250623 14:07:24.956213 6348 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 reported cstate change: config changed from index 11 to 13, VOTER b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3) evicted. New cstate: current_term: 3 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:24.959020 6334 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 85d08d41f0684b42a69070fc40c4c47c with cas_config_opid_index 11: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
W20250623 14:07:24.965998 6348 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 85d08d41f0684b42a69070fc40c4c47c on TS b940cd7f3f124d68bcba419c0b50f589: Not found: failed to reset TS proxy: Could not find TS for UUID b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:24.970116 6636 consensus_queue.cc:237] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 3.13, Last appended by leader: 11, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:24.972386 6713 raft_consensus.cc:2953] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 [term 3 LEADER]: Committing config change with OpId 3.14: config changed from index 13 to 14, VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) evicted. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } }
I20250623 14:07:24.980044 6334 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 85d08d41f0684b42a69070fc40c4c47c with cas_config_opid_index 13: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250623 14:07:24.983713 6349 catalog_manager.cc:5582] T 85d08d41f0684b42a69070fc40c4c47c P 7e49e301bb5f4e74937085e1a83fb792 reported cstate change: config changed from index 13 to 14, VOTER 6916fcf9b530457ca60b224fc870be01 (127.2.78.1) evicted. New cstate: current_term: 3 leader_uuid: "7e49e301bb5f4e74937085e1a83fb792" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250623 14:07:25.017936 6471 tablet_service.cc:1515] Processing DeleteTablet for tablet 85d08d41f0684b42a69070fc40c4c47c with delete_type TABLET_DATA_TOMBSTONED (TS 6916fcf9b530457ca60b224fc870be01 not found in new config with opid_index 14) from {username='slave'} at 127.0.0.1:32854
W20250623 14:07:25.020905 6334 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 85d08d41f0684b42a69070fc40c4c47c on TS b940cd7f3f124d68bcba419c0b50f589 failed: Not found: failed to reset TS proxy: Could not find TS for UUID b940cd7f3f124d68bcba419c0b50f589
I20250623 14:07:25.021282 6761 tablet_replica.cc:331] stopping tablet replica
I20250623 14:07:25.022228 6761 raft_consensus.cc:2241] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250623 14:07:25.022925 6761 raft_consensus.cc:2270] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250623 14:07:25.025017 6737 log.cc:826] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:25.026540 6761 ts_tablet_manager.cc:1905] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250623 14:07:25.042457 6761 ts_tablet_manager.cc:1918] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.13
I20250623 14:07:25.042948 6761 log.cc:1199] T 85d08d41f0684b42a69070fc40c4c47c P 6916fcf9b530457ca60b224fc870be01: Deleting WAL directory at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/wals/85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:25.044562 6336 catalog_manager.cc:4928] TS 6916fcf9b530457ca60b224fc870be01 (127.2.78.1:41875): tablet 85d08d41f0684b42a69070fc40c4c47c (table TestTable [id=141ed96f42464f3f849c1ad214641564]) successfully deleted
W20250623 14:07:25.066818 6568 consensus_peers.cc:487] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 -> Peer b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): Couldn't send request to peer b940cd7f3f124d68bcba419c0b50f589. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.3:38593: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:25.107892 6491 raft_consensus.cc:1273] T a1c4064e64534f98b92fa326520bca37 P 6916fcf9b530457ca60b224fc870be01 [term 3 FOLLOWER]: Refusing update from remote peer 7e49e301bb5f4e74937085e1a83fb792: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250623 14:07:25.109556 6713 consensus_queue.cc:1035] T a1c4064e64534f98b92fa326520bca37 P 7e49e301bb5f4e74937085e1a83fb792 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250623 14:07:25.148075 6737 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:25.148808 6737 tablet_bootstrap.cc:492] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Bootstrap complete.
I20250623 14:07:25.150121 6737 ts_tablet_manager.cc:1397] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent bootstrapping tablet: real 0.202s user 0.148s sys 0.047s
I20250623 14:07:25.161723 6686 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:38593
I20250623 14:07:25.161908 6847 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:38593 every 8 connection(s)
I20250623 14:07:25.164494 6686 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:07:25.163997 6737 raft_consensus.cc:357] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:25.166183 6737 raft_consensus.cc:738] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Initialized, Role: FOLLOWER
I20250623 14:07:25.167347 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6686
I20250623 14:07:25.167282 6737 consensus_queue.cc:260] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } } peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } attrs { promote: false } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } }
I20250623 14:07:25.184827 6737 ts_tablet_manager.cc:1428] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Time spent starting tablet: real 0.034s user 0.022s sys 0.001s
I20250623 14:07:25.185688 6737 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap starting.
I20250623 14:07:25.199733 6848 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40255
I20250623 14:07:25.200217 6848 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:25.201345 6848 heartbeater.cc:507] Master 127.2.78.62:40255 requested a full tablet report, sending...
I20250623 14:07:25.206476 6348 ts_manager.cc:194] Registered new tserver with Master: b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593)
I20250623 14:07:25.211266 6348 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:45697
I20250623 14:07:25.215960 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:07:25.216585 6848 heartbeater.cc:499] Master 127.2.78.62:40255 was elected leader, sending a full tablet report...
I20250623 14:07:25.220974 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
W20250623 14:07:25.224758 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250623 14:07:25.301690 6737 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:07:25.302395 6737 tablet_bootstrap.cc:492] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Bootstrap complete.
I20250623 14:07:25.303483 6737 ts_tablet_manager.cc:1397] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Time spent bootstrapping tablet: real 0.118s user 0.102s sys 0.011s
I20250623 14:07:25.305080 6737 raft_consensus.cc:357] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:25.305506 6737 raft_consensus.cc:738] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: b940cd7f3f124d68bcba419c0b50f589, State: Initialized, Role: FOLLOWER
I20250623 14:07:25.305996 6737 consensus_queue.cc:260] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6916fcf9b530457ca60b224fc870be01" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 41875 } } peers { permanent_uuid: "7e49e301bb5f4e74937085e1a83fb792" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 33299 } attrs { promote: false } } peers { permanent_uuid: "b940cd7f3f124d68bcba419c0b50f589" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 38593 } attrs { promote: false } }
I20250623 14:07:25.307150 6737 ts_tablet_manager.cc:1428] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589: Time spent starting tablet: real 0.003s user 0.004s sys 0.000s
I20250623 14:07:25.352635 6782 tablet_service.cc:1515] Processing DeleteTablet for tablet 85d08d41f0684b42a69070fc40c4c47c with delete_type TABLET_DATA_TOMBSTONED (TS b940cd7f3f124d68bcba419c0b50f589 not found in new config with opid_index 13) from {username='slave'} at 127.0.0.1:45458
I20250623 14:07:25.357403 6858 tablet_replica.cc:331] stopping tablet replica
I20250623 14:07:25.358078 6858 raft_consensus.cc:2241] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250623 14:07:25.358510 6858 raft_consensus.cc:2270] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250623 14:07:25.360550 6858 ts_tablet_manager.cc:1905] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250623 14:07:25.371430 6858 ts_tablet_manager.cc:1918] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.11
I20250623 14:07:25.371731 6858 log.cc:1199] T 85d08d41f0684b42a69070fc40c4c47c P b940cd7f3f124d68bcba419c0b50f589: Deleting WAL directory at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/wals/85d08d41f0684b42a69070fc40c4c47c
I20250623 14:07:25.373263 6334 catalog_manager.cc:4928] TS b940cd7f3f124d68bcba419c0b50f589 (127.2.78.3:38593): tablet 85d08d41f0684b42a69070fc40c4c47c (table TestTable [id=141ed96f42464f3f849c1ad214641564]) successfully deleted
I20250623 14:07:25.534860 6802 raft_consensus.cc:3058] T a1c4064e64534f98b92fa326520bca37 P b940cd7f3f124d68bcba419c0b50f589 [term 2 FOLLOWER]: Advancing to term 3
W20250623 14:07:26.229635 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:27.233713 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:28.237637 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:29.241359 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:30.244680 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:30.946178 6380 debug-util.cc:398] Leaking SignalData structure 0x7b080006f060 after lost signal to thread 6317
W20250623 14:07:30.946921 6380 debug-util.cc:398] Leaking SignalData structure 0x7b080008cc60 after lost signal to thread 6383
W20250623 14:07:31.248261 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:32.252686 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:33.255985 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:34.259164 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:34.454291 6380 debug-util.cc:398] Leaking SignalData structure 0x7b0800087140 after lost signal to thread 6317
W20250623 14:07:34.454780 6380 debug-util.cc:398] Leaking SignalData structure 0x7b080006f340 after lost signal to thread 6383
W20250623 14:07:35.262472 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:36.266466 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:37.270188 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:38.273662 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:39.276876 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:40.280241 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:41.283661 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:42.286895 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:43.290357 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250623 14:07:44.293560 2360 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 85d08d41f0684b42a69070fc40c4c47c: tablet_id: "85d08d41f0684b42a69070fc40c4c47c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250623 14:07:45.296572 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6386
I20250623 14:07:45.323202 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6540
I20250623 14:07:45.351998 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6686
I20250623 14:07:45.379282 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6316
2025-06-23T14:07:45Z chronyd exiting
I20250623 14:07:45.431089 2360 test_util.cc:183] -----------------------------------------------
I20250623 14:07:45.431295 2360 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1750687545053178-2360-0
[ FAILED ] AdminCliTest.TestRebuildTables (56651 ms)
[----------] 5 tests from AdminCliTest (120311 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250623 14:07:45.435225 2360 test_util.cc:276] Using random seed: -1133956545
I20250623 14:07:45.439671 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:07:45.439843 2360 ts_itest-base.cc:116] --------------
I20250623 14:07:45.440007 2360 ts_itest-base.cc:117] 5 tablet servers
I20250623 14:07:45.440145 2360 ts_itest-base.cc:118] 3 replicas per TS
I20250623 14:07:45.440291 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:07:45Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:07:45Z Disabled control of system clock
I20250623 14:07:45.482936 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:36099
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:36099
--raft_prepare_replacement_before_eviction=true with env {}
W20250623 14:07:45.780086 6883 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:45.780603 6883 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:45.780990 6883 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:45.810499 6883 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:45.810863 6883 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:07:45.811092 6883 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:45.811296 6883 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:07:45.811482 6883 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:07:45.846304 6883 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:36099
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:36099
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:45.847558 6883 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:45.849097 6883 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:45.863603 6890 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:45.863673 6889 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:45.863687 6892 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:47.032671 6891 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1166 milliseconds
I20250623 14:07:47.032804 6883 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:07:47.034165 6883 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:47.036720 6883 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:47.038086 6883 hybrid_clock.cc:648] HybridClock initialized: now 1750687667038049 us; error 54 us; skew 500 ppm
I20250623 14:07:47.038885 6883 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:47.045893 6883 webserver.cc:469] Webserver started at http://127.2.78.62:37735/ using document root <none> and password file <none>
I20250623 14:07:47.046838 6883 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:47.047057 6883 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:47.047526 6883 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:47.052110 6883 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "b3886a3fc7f84d65b2a8afdca1b922c4"
format_stamp: "Formatted at 2025-06-23 14:07:47 on dist-test-slave-stbh"
I20250623 14:07:47.053283 6883 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "b3886a3fc7f84d65b2a8afdca1b922c4"
format_stamp: "Formatted at 2025-06-23 14:07:47 on dist-test-slave-stbh"
I20250623 14:07:47.060436 6883 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.000s
I20250623 14:07:47.066073 6899 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:47.067126 6883 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250623 14:07:47.067456 6883 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "b3886a3fc7f84d65b2a8afdca1b922c4"
format_stamp: "Formatted at 2025-06-23 14:07:47 on dist-test-slave-stbh"
I20250623 14:07:47.067776 6883 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:47.123124 6883 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:47.124688 6883 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:47.125187 6883 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:47.198035 6883 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:36099
I20250623 14:07:47.198122 6950 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:36099 every 8 connection(s)
I20250623 14:07:47.200742 6883 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:07:47.205830 6951 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:47.211180 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6883
I20250623 14:07:47.211587 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:07:47.223789 6951 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4: Bootstrap starting.
I20250623 14:07:47.230670 6951 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4: Neither blocks nor log segments found. Creating new log.
I20250623 14:07:47.232635 6951 log.cc:826] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:47.237625 6951 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4: No bootstrap required, opened a new log
I20250623 14:07:47.255136 6951 raft_consensus.cc:357] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } }
I20250623 14:07:47.255803 6951 raft_consensus.cc:383] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:07:47.256075 6951 raft_consensus.cc:738] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b3886a3fc7f84d65b2a8afdca1b922c4, State: Initialized, Role: FOLLOWER
I20250623 14:07:47.256726 6951 consensus_queue.cc:260] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } }
I20250623 14:07:47.257205 6951 raft_consensus.cc:397] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:07:47.257459 6951 raft_consensus.cc:491] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:07:47.257733 6951 raft_consensus.cc:3058] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:07:47.261983 6951 raft_consensus.cc:513] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } }
I20250623 14:07:47.262661 6951 leader_election.cc:304] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b3886a3fc7f84d65b2a8afdca1b922c4; no voters:
I20250623 14:07:47.264289 6951 leader_election.cc:290] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:07:47.265010 6956 raft_consensus.cc:2802] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:07:47.267474 6956 raft_consensus.cc:695] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [term 1 LEADER]: Becoming Leader. State: Replica: b3886a3fc7f84d65b2a8afdca1b922c4, State: Running, Role: LEADER
I20250623 14:07:47.268074 6951 sys_catalog.cc:564] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:07:47.268245 6956 consensus_queue.cc:237] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } }
I20250623 14:07:47.279316 6957 sys_catalog.cc:455] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } } }
I20250623 14:07:47.281311 6957 sys_catalog.cc:458] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:47.280512 6958 sys_catalog.cc:455] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [sys.catalog]: SysCatalogTable state changed. Reason: New leader b3886a3fc7f84d65b2a8afdca1b922c4. Latest consensus state: current_term: 1 leader_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b3886a3fc7f84d65b2a8afdca1b922c4" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 36099 } } }
I20250623 14:07:47.281955 6958 sys_catalog.cc:458] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4 [sys.catalog]: This master's current role is: LEADER
I20250623 14:07:47.286990 6966 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:07:47.298902 6966 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:07:47.315966 6966 catalog_manager.cc:1349] Generated new cluster ID: c9cb624f24b64b5099834c0354a051bf
I20250623 14:07:47.316287 6966 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:07:47.347750 6966 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:07:47.349179 6966 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:07:47.361718 6966 catalog_manager.cc:5955] T 00000000000000000000000000000000 P b3886a3fc7f84d65b2a8afdca1b922c4: Generated new TSK 0
I20250623 14:07:47.362833 6966 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:07:47.383992 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250623 14:07:47.689661 6975 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:47.690210 6975 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:47.690678 6975 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:47.721148 6975 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:47.721547 6975 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:47.722373 6975 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:07:47.757211 6975 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:47.758549 6975 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:47.760236 6975 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:47.777542 6982 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:47.777674 6984 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:47.781921 6981 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:48.951553 6975 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250623 14:07:48.951484 6983 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:07:48.955534 6975 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:48.958037 6975 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:48.959367 6975 hybrid_clock.cc:648] HybridClock initialized: now 1750687668959338 us; error 42 us; skew 500 ppm
I20250623 14:07:48.960139 6975 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:48.967139 6975 webserver.cc:469] Webserver started at http://127.2.78.1:46335/ using document root <none> and password file <none>
I20250623 14:07:48.967954 6975 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:48.968158 6975 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:48.968596 6975 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:48.973016 6975 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "1652fbe0f4ed40a9b12dd55a85c47110"
format_stamp: "Formatted at 2025-06-23 14:07:48 on dist-test-slave-stbh"
I20250623 14:07:48.974141 6975 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "1652fbe0f4ed40a9b12dd55a85c47110"
format_stamp: "Formatted at 2025-06-23 14:07:48 on dist-test-slave-stbh"
I20250623 14:07:48.981177 6975 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250623 14:07:48.987020 6991 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:48.988070 6975 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250623 14:07:48.988392 6975 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1652fbe0f4ed40a9b12dd55a85c47110"
format_stamp: "Formatted at 2025-06-23 14:07:48 on dist-test-slave-stbh"
I20250623 14:07:48.988749 6975 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:49.038394 6975 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:49.039765 6975 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:49.040179 6975 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:49.042467 6975 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:49.046290 6975 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:07:49.046490 6975 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:49.046764 6975 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:07:49.046926 6975 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:49.180254 6975 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:46015
I20250623 14:07:49.180367 7103 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:46015 every 8 connection(s)
I20250623 14:07:49.182716 6975 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:07:49.192209 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 6975
I20250623 14:07:49.192617 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:07:49.199154 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250623 14:07:49.202713 7104 heartbeater.cc:344] Connected to a master server at 127.2.78.62:36099
I20250623 14:07:49.203164 7104 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:49.204119 7104 heartbeater.cc:507] Master 127.2.78.62:36099 requested a full tablet report, sending...
I20250623 14:07:49.206665 6916 ts_manager.cc:194] Registered new tserver with Master: 1652fbe0f4ed40a9b12dd55a85c47110 (127.2.78.1:46015)
I20250623 14:07:49.208518 6916 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:52963
W20250623 14:07:49.498914 7108 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:49.499363 7108 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:49.499785 7108 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:49.529929 7108 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:49.530284 7108 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:49.531013 7108 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:07:49.565225 7108 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:49.566470 7108 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:49.567991 7108 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:49.583479 7117 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:50.212577 7104 heartbeater.cc:499] Master 127.2.78.62:36099 was elected leader, sending a full tablet report...
W20250623 14:07:49.583595 7114 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:49.584985 7108 server_base.cc:1048] running on GCE node
W20250623 14:07:49.584759 7115 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:50.728556 7108 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:50.731374 7108 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:50.732798 7108 hybrid_clock.cc:648] HybridClock initialized: now 1750687670732745 us; error 74 us; skew 500 ppm
I20250623 14:07:50.733587 7108 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:50.749610 7108 webserver.cc:469] Webserver started at http://127.2.78.2:46833/ using document root <none> and password file <none>
I20250623 14:07:50.750592 7108 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:50.750775 7108 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:50.751191 7108 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:50.755492 7108 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "0856537dacc1490ea70585354bdea054"
format_stamp: "Formatted at 2025-06-23 14:07:50 on dist-test-slave-stbh"
I20250623 14:07:50.756546 7108 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "0856537dacc1490ea70585354bdea054"
format_stamp: "Formatted at 2025-06-23 14:07:50 on dist-test-slave-stbh"
I20250623 14:07:50.763648 7108 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.001s sys 0.008s
I20250623 14:07:50.769193 7124 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:50.770326 7108 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250623 14:07:50.770620 7108 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "0856537dacc1490ea70585354bdea054"
format_stamp: "Formatted at 2025-06-23 14:07:50 on dist-test-slave-stbh"
I20250623 14:07:50.770910 7108 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:50.823938 7108 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:50.825330 7108 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:50.825711 7108 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:50.828217 7108 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:50.832002 7108 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:07:50.832197 7108 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:50.832391 7108 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:07:50.832518 7108 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:50.965607 7108 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:34583
I20250623 14:07:50.965708 7236 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:34583 every 8 connection(s)
I20250623 14:07:50.968221 7108 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250623 14:07:50.976763 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7108
I20250623 14:07:50.977172 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250623 14:07:50.983740 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250623 14:07:50.989126 7237 heartbeater.cc:344] Connected to a master server at 127.2.78.62:36099
I20250623 14:07:50.989655 7237 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:50.991077 7237 heartbeater.cc:507] Master 127.2.78.62:36099 requested a full tablet report, sending...
I20250623 14:07:50.993479 6916 ts_manager.cc:194] Registered new tserver with Master: 0856537dacc1490ea70585354bdea054 (127.2.78.2:34583)
I20250623 14:07:50.995234 6916 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:57711
W20250623 14:07:51.291626 7241 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:51.292089 7241 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:51.292524 7241 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:51.322945 7241 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:51.323292 7241 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:51.324018 7241 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:07:51.357372 7241 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:51.358621 7241 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:51.360226 7241 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:51.375579 7248 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:51.998445 7237 heartbeater.cc:499] Master 127.2.78.62:36099 was elected leader, sending a full tablet report...
W20250623 14:07:51.375715 7251 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:51.375960 7249 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:51.377326 7241 server_base.cc:1048] running on GCE node
I20250623 14:07:52.522671 7241 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:52.524825 7241 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:52.526170 7241 hybrid_clock.cc:648] HybridClock initialized: now 1750687672526136 us; error 53 us; skew 500 ppm
I20250623 14:07:52.526933 7241 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:52.533455 7241 webserver.cc:469] Webserver started at http://127.2.78.3:33375/ using document root <none> and password file <none>
I20250623 14:07:52.534418 7241 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:52.534627 7241 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:52.535056 7241 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:52.539551 7241 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "1fb5e4a252184b069544f68dc4c62b99"
format_stamp: "Formatted at 2025-06-23 14:07:52 on dist-test-slave-stbh"
I20250623 14:07:52.540673 7241 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "1fb5e4a252184b069544f68dc4c62b99"
format_stamp: "Formatted at 2025-06-23 14:07:52 on dist-test-slave-stbh"
I20250623 14:07:52.547571 7241 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.004s sys 0.004s
I20250623 14:07:52.553095 7258 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:52.554157 7241 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250623 14:07:52.554502 7241 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "1fb5e4a252184b069544f68dc4c62b99"
format_stamp: "Formatted at 2025-06-23 14:07:52 on dist-test-slave-stbh"
I20250623 14:07:52.554822 7241 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:52.614110 7241 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:52.615621 7241 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:52.616092 7241 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:52.618582 7241 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:52.622793 7241 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:07:52.623013 7241 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:52.623260 7241 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:07:52.623415 7241 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:52.757311 7241 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:33001
I20250623 14:07:52.757413 7370 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:33001 every 8 connection(s)
I20250623 14:07:52.759860 7241 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250623 14:07:52.763509 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7241
I20250623 14:07:52.764017 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250623 14:07:52.771343 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.4:0
--local_ip_for_outbound_sockets=127.2.78.4
--webserver_interface=127.2.78.4
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250623 14:07:52.782917 7371 heartbeater.cc:344] Connected to a master server at 127.2.78.62:36099
I20250623 14:07:52.783460 7371 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:52.784732 7371 heartbeater.cc:507] Master 127.2.78.62:36099 requested a full tablet report, sending...
I20250623 14:07:52.787271 6916 ts_manager.cc:194] Registered new tserver with Master: 1fb5e4a252184b069544f68dc4c62b99 (127.2.78.3:33001)
I20250623 14:07:52.788552 6916 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:50671
W20250623 14:07:53.079421 7375 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:53.079973 7375 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:53.080513 7375 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:53.111806 7375 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:53.112223 7375 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:53.112986 7375 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.4
I20250623 14:07:53.147996 7375 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.2.78.4
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.4
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:53.149415 7375 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:53.151154 7375 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:53.169389 7382 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:53.791608 7371 heartbeater.cc:499] Master 127.2.78.62:36099 was elected leader, sending a full tablet report...
W20250623 14:07:53.169384 7381 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:53.172638 7375 server_base.cc:1048] running on GCE node
W20250623 14:07:53.170342 7384 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:54.310590 7375 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:54.312798 7375 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:54.314150 7375 hybrid_clock.cc:648] HybridClock initialized: now 1750687674314109 us; error 56 us; skew 500 ppm
I20250623 14:07:54.314965 7375 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:54.321447 7375 webserver.cc:469] Webserver started at http://127.2.78.4:41483/ using document root <none> and password file <none>
I20250623 14:07:54.322521 7375 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:54.322762 7375 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:54.323251 7375 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:54.327832 7375 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "1a73cff03a80412180342270d16aba71"
format_stamp: "Formatted at 2025-06-23 14:07:54 on dist-test-slave-stbh"
I20250623 14:07:54.328935 7375 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "1a73cff03a80412180342270d16aba71"
format_stamp: "Formatted at 2025-06-23 14:07:54 on dist-test-slave-stbh"
I20250623 14:07:54.336000 7375 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.009s sys 0.000s
I20250623 14:07:54.342092 7391 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:54.343084 7375 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250623 14:07:54.343405 7375 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "1a73cff03a80412180342270d16aba71"
format_stamp: "Formatted at 2025-06-23 14:07:54 on dist-test-slave-stbh"
I20250623 14:07:54.343725 7375 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:54.397248 7375 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:54.398727 7375 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:54.399178 7375 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:54.402297 7375 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:54.406637 7375 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:07:54.406858 7375 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:54.407109 7375 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:07:54.407294 7375 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:54.539083 7375 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.4:39651
I20250623 14:07:54.539203 7504 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.4:39651 every 8 connection(s)
I20250623 14:07:54.541587 7375 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250623 14:07:54.551362 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7375
I20250623 14:07:54.551791 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250623 14:07:54.558668 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.5:0
--local_ip_for_outbound_sockets=127.2.78.5
--webserver_interface=127.2.78.5
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--builtin_ntp_servers=127.2.78.20:35903
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250623 14:07:54.563246 7505 heartbeater.cc:344] Connected to a master server at 127.2.78.62:36099
I20250623 14:07:54.563647 7505 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:54.564620 7505 heartbeater.cc:507] Master 127.2.78.62:36099 requested a full tablet report, sending...
I20250623 14:07:54.566649 6916 ts_manager.cc:194] Registered new tserver with Master: 1a73cff03a80412180342270d16aba71 (127.2.78.4:39651)
I20250623 14:07:54.568125 6916 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.4:33805
W20250623 14:07:54.880915 7509 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:07:54.881441 7509 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:07:54.881948 7509 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:07:54.912699 7509 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250623 14:07:54.913134 7509 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:07:54.913967 7509 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.5
I20250623 14:07:54.952065 7509 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:35903
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.5:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.2.78.5
--webserver_port=0
--tserver_master_addrs=127.2.78.62:36099
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.5
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:07:54.953385 7509 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:07:54.955088 7509 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:07:54.972887 7516 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:07:55.571398 7505 heartbeater.cc:499] Master 127.2.78.62:36099 was elected leader, sending a full tablet report...
W20250623 14:07:54.973448 7518 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:54.972892 7515 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:07:56.213052 7517 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1242 milliseconds
I20250623 14:07:56.213161 7509 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:07:56.218274 7509 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:07:56.221097 7509 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:07:56.222628 7509 hybrid_clock.cc:648] HybridClock initialized: now 1750687676222581 us; error 64 us; skew 500 ppm
I20250623 14:07:56.223418 7509 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:07:56.236340 7509 webserver.cc:469] Webserver started at http://127.2.78.5:35865/ using document root <none> and password file <none>
I20250623 14:07:56.237318 7509 fs_manager.cc:362] Metadata directory not provided
I20250623 14:07:56.237533 7509 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:07:56.238093 7509 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:07:56.243470 7509 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "d100051b7596465c9ec276a906aa3579"
format_stamp: "Formatted at 2025-06-23 14:07:56 on dist-test-slave-stbh"
I20250623 14:07:56.244602 7509 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "d100051b7596465c9ec276a906aa3579"
format_stamp: "Formatted at 2025-06-23 14:07:56 on dist-test-slave-stbh"
I20250623 14:07:56.252041 7509 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.003s
I20250623 14:07:56.257850 7527 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:56.258868 7509 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250623 14:07:56.259225 7509 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "d100051b7596465c9ec276a906aa3579"
format_stamp: "Formatted at 2025-06-23 14:07:56 on dist-test-slave-stbh"
I20250623 14:07:56.259557 7509 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:07:56.321480 7509 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:07:56.322883 7509 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:07:56.323276 7509 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:07:56.325654 7509 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:07:56.329495 7509 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:07:56.329715 7509 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:56.330003 7509 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:07:56.330158 7509 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:07:56.459264 7509 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.5:44917
I20250623 14:07:56.459370 7639 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.5:44917 every 8 connection(s)
I20250623 14:07:56.461714 7509 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250623 14:07:56.471240 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7509
I20250623 14:07:56.471978 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250623 14:07:56.483975 7640 heartbeater.cc:344] Connected to a master server at 127.2.78.62:36099
I20250623 14:07:56.484402 7640 heartbeater.cc:461] Registering TS with master...
I20250623 14:07:56.485383 7640 heartbeater.cc:507] Master 127.2.78.62:36099 requested a full tablet report, sending...
I20250623 14:07:56.487440 6916 ts_manager.cc:194] Registered new tserver with Master: d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917)
I20250623 14:07:56.488682 6916 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.5:46983
I20250623 14:07:56.493594 2360 external_mini_cluster.cc:934] 5 TS(s) registered with all masters
I20250623 14:07:56.529243 6916 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39994:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250623 14:07:56.604140 7172 tablet_service.cc:1468] Processing CreateTablet for tablet 530795b3c01642039db54fabe1e0de2d (DEFAULT_TABLE table=TestTable [id=5322081aae204fd294424f57bf34e23d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:07:56.606103 7172 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 530795b3c01642039db54fabe1e0de2d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:56.606228 7306 tablet_service.cc:1468] Processing CreateTablet for tablet 530795b3c01642039db54fabe1e0de2d (DEFAULT_TABLE table=TestTable [id=5322081aae204fd294424f57bf34e23d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:07:56.606179 7575 tablet_service.cc:1468] Processing CreateTablet for tablet 530795b3c01642039db54fabe1e0de2d (DEFAULT_TABLE table=TestTable [id=5322081aae204fd294424f57bf34e23d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:07:56.607908 7306 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 530795b3c01642039db54fabe1e0de2d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:56.608729 7575 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 530795b3c01642039db54fabe1e0de2d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:07:56.629331 7659 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Bootstrap starting.
I20250623 14:07:56.633302 7660 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Bootstrap starting.
I20250623 14:07:56.637037 7659 tablet_bootstrap.cc:654] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Neither blocks nor log segments found. Creating new log.
I20250623 14:07:56.639899 7659 log.cc:826] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:56.640404 7661 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Bootstrap starting.
I20250623 14:07:56.640937 7660 tablet_bootstrap.cc:654] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Neither blocks nor log segments found. Creating new log.
I20250623 14:07:56.643301 7660 log.cc:826] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:56.644707 7659 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: No bootstrap required, opened a new log
I20250623 14:07:56.645246 7659 ts_tablet_manager.cc:1397] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Time spent bootstrapping tablet: real 0.017s user 0.010s sys 0.004s
I20250623 14:07:56.647782 7661 tablet_bootstrap.cc:654] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Neither blocks nor log segments found. Creating new log.
I20250623 14:07:56.649094 7660 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: No bootstrap required, opened a new log
I20250623 14:07:56.649600 7660 ts_tablet_manager.cc:1397] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Time spent bootstrapping tablet: real 0.017s user 0.016s sys 0.000s
I20250623 14:07:56.650038 7661 log.cc:826] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Log is configured to *not* fsync() on all Append() calls
I20250623 14:07:56.657178 7661 tablet_bootstrap.cc:492] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: No bootstrap required, opened a new log
I20250623 14:07:56.657670 7661 ts_tablet_manager.cc:1397] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Time spent bootstrapping tablet: real 0.018s user 0.010s sys 0.004s
I20250623 14:07:56.664089 7659 raft_consensus.cc:357] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.664776 7659 raft_consensus.cc:383] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:07:56.665052 7659 raft_consensus.cc:738] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1fb5e4a252184b069544f68dc4c62b99, State: Initialized, Role: FOLLOWER
I20250623 14:07:56.665817 7659 consensus_queue.cc:260] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.671469 7659 ts_tablet_manager.cc:1428] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Time spent starting tablet: real 0.026s user 0.006s sys 0.018s
I20250623 14:07:56.675424 7660 raft_consensus.cc:357] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.676352 7660 raft_consensus.cc:383] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:07:56.676646 7660 raft_consensus.cc:738] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0856537dacc1490ea70585354bdea054, State: Initialized, Role: FOLLOWER
I20250623 14:07:56.677560 7660 consensus_queue.cc:260] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.681844 7660 ts_tablet_manager.cc:1428] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Time spent starting tablet: real 0.032s user 0.032s sys 0.000s
I20250623 14:07:56.683243 7661 raft_consensus.cc:357] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.683867 7661 raft_consensus.cc:383] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:07:56.684211 7661 raft_consensus.cc:738] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d100051b7596465c9ec276a906aa3579, State: Initialized, Role: FOLLOWER
I20250623 14:07:56.684955 7661 consensus_queue.cc:260] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.687861 7640 heartbeater.cc:499] Master 127.2.78.62:36099 was elected leader, sending a full tablet report...
I20250623 14:07:56.688828 7661 ts_tablet_manager.cc:1428] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Time spent starting tablet: real 0.031s user 0.019s sys 0.008s
W20250623 14:07:56.716653 7641 tablet.cc:2378] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:07:56.728893 7238 tablet.cc:2378] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:07:56.768075 7372 tablet.cc:2378] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:07:56.817291 7666 raft_consensus.cc:491] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:07:56.817771 7666 raft_consensus.cc:513] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.819983 7666 leader_election.cc:290] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917), 1fb5e4a252184b069544f68dc4c62b99 (127.2.78.3:33001)
I20250623 14:07:56.831454 7595 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "530795b3c01642039db54fabe1e0de2d" candidate_uuid: "0856537dacc1490ea70585354bdea054" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d100051b7596465c9ec276a906aa3579" is_pre_election: true
I20250623 14:07:56.832332 7595 raft_consensus.cc:2466] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0856537dacc1490ea70585354bdea054 in term 0.
I20250623 14:07:56.833516 7127 leader_election.cc:304] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0856537dacc1490ea70585354bdea054, d100051b7596465c9ec276a906aa3579; no voters:
I20250623 14:07:56.833832 7326 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "530795b3c01642039db54fabe1e0de2d" candidate_uuid: "0856537dacc1490ea70585354bdea054" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1fb5e4a252184b069544f68dc4c62b99" is_pre_election: true
I20250623 14:07:56.834448 7666 raft_consensus.cc:2802] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:07:56.834723 7666 raft_consensus.cc:491] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:07:56.834635 7326 raft_consensus.cc:2466] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0856537dacc1490ea70585354bdea054 in term 0.
I20250623 14:07:56.835000 7666 raft_consensus.cc:3058] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:07:56.839437 7666 raft_consensus.cc:513] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.840739 7666 leader_election.cc:290] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [CANDIDATE]: Term 1 election: Requested vote from peers d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917), 1fb5e4a252184b069544f68dc4c62b99 (127.2.78.3:33001)
I20250623 14:07:56.841598 7595 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "530795b3c01642039db54fabe1e0de2d" candidate_uuid: "0856537dacc1490ea70585354bdea054" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d100051b7596465c9ec276a906aa3579"
I20250623 14:07:56.841698 7326 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "530795b3c01642039db54fabe1e0de2d" candidate_uuid: "0856537dacc1490ea70585354bdea054" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1fb5e4a252184b069544f68dc4c62b99"
I20250623 14:07:56.842092 7595 raft_consensus.cc:3058] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:07:56.842146 7326 raft_consensus.cc:3058] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:07:56.846539 7326 raft_consensus.cc:2466] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0856537dacc1490ea70585354bdea054 in term 1.
I20250623 14:07:56.846534 7595 raft_consensus.cc:2466] T 530795b3c01642039db54fabe1e0de2d P d100051b7596465c9ec276a906aa3579 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0856537dacc1490ea70585354bdea054 in term 1.
I20250623 14:07:56.847361 7127 leader_election.cc:304] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0856537dacc1490ea70585354bdea054, d100051b7596465c9ec276a906aa3579; no voters:
I20250623 14:07:56.847926 7666 raft_consensus.cc:2802] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:07:56.849411 7666 raft_consensus.cc:695] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [term 1 LEADER]: Becoming Leader. State: Replica: 0856537dacc1490ea70585354bdea054, State: Running, Role: LEADER
I20250623 14:07:56.850209 7666 consensus_queue.cc:237] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } }
I20250623 14:07:56.861254 6915 catalog_manager.cc:5582] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 reported cstate change: term changed from 0 to 1, leader changed from <none> to 0856537dacc1490ea70585354bdea054 (127.2.78.2). New cstate: current_term: 1 leader_uuid: "0856537dacc1490ea70585354bdea054" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d100051b7596465c9ec276a906aa3579" member_type: VOTER last_known_addr { host: "127.2.78.5" port: 44917 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "0856537dacc1490ea70585354bdea054" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 34583 } health_report { overall_health: HEALTHY } } }
I20250623 14:07:56.898660 2360 external_mini_cluster.cc:934] 5 TS(s) registered with all masters
I20250623 14:07:56.901981 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 0856537dacc1490ea70585354bdea054 to finish bootstrapping
I20250623 14:07:56.915827 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1fb5e4a252184b069544f68dc4c62b99 to finish bootstrapping
I20250623 14:07:56.925544 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d100051b7596465c9ec276a906aa3579 to finish bootstrapping
I20250623 14:07:56.935441 2360 test_util.cc:276] Using random seed: -1122456330
I20250623 14:07:56.958205 2360 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250623 14:07:56.959084 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7509
W20250623 14:07:57.006752 7127 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111)
I20250623 14:07:57.007591 7326 raft_consensus.cc:1273] T 530795b3c01642039db54fabe1e0de2d P 1fb5e4a252184b069544f68dc4c62b99 [term 1 FOLLOWER]: Refusing update from remote peer 0856537dacc1490ea70585354bdea054: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250623 14:07:57.010856 7670 consensus_queue.cc:1035] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1fb5e4a252184b069544f68dc4c62b99" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 33001 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250623 14:07:57.013038 7127 consensus_peers.cc:487] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 -> Peer d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917): Couldn't send request to peer d100051b7596465c9ec276a906aa3579. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250623 14:07:57.032441 7680 mvcc.cc:204] Tried to move back new op lower bound from 7170816725000851456 to 7170816724385095680. Current Snapshot: MvccSnapshot[applied={T|T < 7170816725000851456}]
I20250623 14:07:57.035835 7683 mvcc.cc:204] Tried to move back new op lower bound from 7170816725000851456 to 7170816724385095680. Current Snapshot: MvccSnapshot[applied={T|T < 7170816725000851456}]
W20250623 14:07:59.638877 7127 consensus_peers.cc:487] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 -> Peer d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917): Couldn't send request to peer d100051b7596465c9ec276a906aa3579. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250623 14:07:59.880311 7172 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
W20250623 14:07:59.908324 7100 debug-util.cc:398] Leaking SignalData structure 0x7b08000b2520 after lost signal to thread 6976
I20250623 14:07:59.924777 7039 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:07:59.925729 7306 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:07:59.950438 7440 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250623 14:08:00.017576 7501 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac5c0 after lost signal to thread 7376
W20250623 14:08:02.307744 7127 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111) [suppressed 9 similar messages]
W20250623 14:08:02.323009 7127 consensus_peers.cc:487] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 -> Peer d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917): Couldn't send request to peer d100051b7596465c9ec276a906aa3579. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
I20250623 14:08:02.581919 7172 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250623 14:08:02.642403 7440 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:08:02.663491 7306 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:08:02.676647 7039 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250623 14:08:04.327941 7100 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac280 after lost signal to thread 6976
W20250623 14:08:04.328939 7100 debug-util.cc:398] Leaking SignalData structure 0x7b08000b5de0 after lost signal to thread 7103
W20250623 14:08:04.736977 7127 consensus_peers.cc:487] T 530795b3c01642039db54fabe1e0de2d P 0856537dacc1490ea70585354bdea054 -> Peer d100051b7596465c9ec276a906aa3579 (127.2.78.5:44917): Couldn't send request to peer d100051b7596465c9ec276a906aa3579. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.5:44917: connect: Connection refused (error 111). This is attempt 16: this message will repeat every 5th retry.
I20250623 14:08:05.431810 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6975
I20250623 14:08:05.460218 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7108
I20250623 14:08:05.499444 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7241
I20250623 14:08:05.532793 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7375
I20250623 14:08:05.560766 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 6883
2025-06-23T14:08:05Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (20190 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (20190 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250623 14:08:05.626194 2360 test_util.cc:276] Using random seed: -1113765577
I20250623 14:08:05.630635 2360 ts_itest-base.cc:115] Starting cluster with:
I20250623 14:08:05.630820 2360 ts_itest-base.cc:116] --------------
I20250623 14:08:05.630999 2360 ts_itest-base.cc:117] 1 tablet servers
I20250623 14:08:05.631155 2360 ts_itest-base.cc:118] 1 replicas per TS
I20250623 14:08:05.631345 2360 ts_itest-base.cc:119] --------------
2025-06-23T14:08:05Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:08:05Z Disabled control of system clock
I20250623 14:08:05.677464 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40621
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:37873
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:40621 with env {}
W20250623 14:08:05.978940 7787 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:05.979552 7787 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:05.980020 7787 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:06.011294 7787 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:08:06.011636 7787 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:06.011888 7787 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:08:06.012135 7787 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:08:06.048213 7787 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:37873
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:40621
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:40621
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:06.049815 7787 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:06.051594 7787 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:06.067767 7793 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:06.067905 7794 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:06.070214 7787 server_base.cc:1048] running on GCE node
W20250623 14:08:06.068157 7796 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:07.230602 7787 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:07.233633 7787 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:07.235088 7787 hybrid_clock.cc:648] HybridClock initialized: now 1750687687235019 us; error 83 us; skew 500 ppm
I20250623 14:08:07.235882 7787 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:07.246806 7787 webserver.cc:469] Webserver started at http://127.2.78.62:43769/ using document root <none> and password file <none>
I20250623 14:08:07.247728 7787 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:07.247908 7787 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:07.248342 7787 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:07.252684 7787 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "397e08976d8341ac8b26536a40e1bcca"
format_stamp: "Formatted at 2025-06-23 14:08:07 on dist-test-slave-stbh"
I20250623 14:08:07.253697 7787 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "397e08976d8341ac8b26536a40e1bcca"
format_stamp: "Formatted at 2025-06-23 14:08:07 on dist-test-slave-stbh"
I20250623 14:08:07.260594 7787 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250623 14:08:07.266085 7803 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:07.267061 7787 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.003s
I20250623 14:08:07.267376 7787 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
uuid: "397e08976d8341ac8b26536a40e1bcca"
format_stamp: "Formatted at 2025-06-23 14:08:07 on dist-test-slave-stbh"
I20250623 14:08:07.267721 7787 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:07.330883 7787 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:07.332357 7787 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:07.332837 7787 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:07.401940 7787 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:40621
I20250623 14:08:07.402014 7854 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:40621 every 8 connection(s)
I20250623 14:08:07.404568 7787 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250623 14:08:07.409533 7855 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:07.410286 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7787
I20250623 14:08:07.410787 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250623 14:08:07.434891 7855 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca: Bootstrap starting.
I20250623 14:08:07.440269 7855 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:07.441957 7855 log.cc:826] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:07.446379 7855 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca: No bootstrap required, opened a new log
I20250623 14:08:07.463527 7855 raft_consensus.cc:357] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } }
I20250623 14:08:07.464164 7855 raft_consensus.cc:383] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:07.464390 7855 raft_consensus.cc:738] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 397e08976d8341ac8b26536a40e1bcca, State: Initialized, Role: FOLLOWER
I20250623 14:08:07.465250 7855 consensus_queue.cc:260] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } }
I20250623 14:08:07.465813 7855 raft_consensus.cc:397] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:08:07.466097 7855 raft_consensus.cc:491] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:08:07.466502 7855 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:07.471038 7855 raft_consensus.cc:513] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } }
I20250623 14:08:07.471688 7855 leader_election.cc:304] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 397e08976d8341ac8b26536a40e1bcca; no voters:
I20250623 14:08:07.473294 7855 leader_election.cc:290] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:08:07.473995 7860 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:07.476580 7860 raft_consensus.cc:695] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [term 1 LEADER]: Becoming Leader. State: Replica: 397e08976d8341ac8b26536a40e1bcca, State: Running, Role: LEADER
I20250623 14:08:07.477213 7860 consensus_queue.cc:237] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } }
I20250623 14:08:07.477523 7855 sys_catalog.cc:564] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:08:07.488867 7861 sys_catalog.cc:455] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "397e08976d8341ac8b26536a40e1bcca" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } } }
I20250623 14:08:07.489974 7862 sys_catalog.cc:455] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [sys.catalog]: SysCatalogTable state changed. Reason: New leader 397e08976d8341ac8b26536a40e1bcca. Latest consensus state: current_term: 1 leader_uuid: "397e08976d8341ac8b26536a40e1bcca" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "397e08976d8341ac8b26536a40e1bcca" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 40621 } } }
I20250623 14:08:07.490517 7861 sys_catalog.cc:458] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:07.490725 7862 sys_catalog.cc:458] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:07.493877 7870 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:08:07.504654 7870 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:08:07.519928 7870 catalog_manager.cc:1349] Generated new cluster ID: cd499653f3204fa5998259258667db11
I20250623 14:08:07.520272 7870 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:08:07.543116 7870 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:08:07.545125 7870 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:08:07.561879 7870 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 397e08976d8341ac8b26536a40e1bcca: Generated new TSK 0
I20250623 14:08:07.562953 7870 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:08:07.577330 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40621
--builtin_ntp_servers=127.2.78.20:37873
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250623 14:08:07.893545 7879 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:07.894098 7879 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:07.894577 7879 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:07.926069 7879 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:07.926904 7879 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:08:07.962960 7879 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:37873
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:40621
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:07.964290 7879 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:07.966069 7879 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:07.984172 7886 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:07.985903 7885 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:09.145166 7887 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250623 14:08:09.146826 7888 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:09.146870 7879 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:09.149694 7879 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:09.152406 7879 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:09.153848 7879 hybrid_clock.cc:648] HybridClock initialized: now 1750687689153805 us; error 56 us; skew 500 ppm
I20250623 14:08:09.154666 7879 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:09.161942 7879 webserver.cc:469] Webserver started at http://127.2.78.1:45173/ using document root <none> and password file <none>
I20250623 14:08:09.162879 7879 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:09.163086 7879 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:09.163573 7879 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:09.168396 7879 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "fed74b4a1af14293b6a4d31ec001c290"
format_stamp: "Formatted at 2025-06-23 14:08:09 on dist-test-slave-stbh"
I20250623 14:08:09.169564 7879 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "fed74b4a1af14293b6a4d31ec001c290"
format_stamp: "Formatted at 2025-06-23 14:08:09 on dist-test-slave-stbh"
I20250623 14:08:09.176999 7879 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.001s
I20250623 14:08:09.183547 7895 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:09.184716 7879 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250623 14:08:09.185055 7879 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "fed74b4a1af14293b6a4d31ec001c290"
format_stamp: "Formatted at 2025-06-23 14:08:09 on dist-test-slave-stbh"
I20250623 14:08:09.185389 7879 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:09.247905 7879 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:09.249294 7879 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:09.249708 7879 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:09.252166 7879 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:09.256067 7879 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:08:09.256278 7879 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:09.256552 7879 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:08:09.256702 7879 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:09.386300 7879 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:45533
I20250623 14:08:09.386422 8007 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:45533 every 8 connection(s)
I20250623 14:08:09.388788 7879 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250623 14:08:09.391345 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 7879
I20250623 14:08:09.391882 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1750687545053178-2360-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250623 14:08:09.410113 8008 heartbeater.cc:344] Connected to a master server at 127.2.78.62:40621
I20250623 14:08:09.410552 8008 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:09.411767 8008 heartbeater.cc:507] Master 127.2.78.62:40621 requested a full tablet report, sending...
I20250623 14:08:09.414389 7819 ts_manager.cc:194] Registered new tserver with Master: fed74b4a1af14293b6a4d31ec001c290 (127.2.78.1:45533)
I20250623 14:08:09.416330 7819 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:37865
I20250623 14:08:09.425267 2360 external_mini_cluster.cc:934] 1 TS(s) registered with all masters
I20250623 14:08:09.455175 7819 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48070:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250623 14:08:09.510298 7943 tablet_service.cc:1468] Processing CreateTablet for tablet be5258e96526402d86448f5676f348b9 (DEFAULT_TABLE table=TestTable [id=4bfc695962fb4236bb4adaf013ecce5c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:09.511778 7943 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet be5258e96526402d86448f5676f348b9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:09.530637 8023 tablet_bootstrap.cc:492] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: Bootstrap starting.
I20250623 14:08:09.536123 8023 tablet_bootstrap.cc:654] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:09.537827 8023 log.cc:826] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:09.542238 8023 tablet_bootstrap.cc:492] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: No bootstrap required, opened a new log
I20250623 14:08:09.542599 8023 ts_tablet_manager.cc:1397] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: Time spent bootstrapping tablet: real 0.012s user 0.011s sys 0.000s
I20250623 14:08:09.559799 8023 raft_consensus.cc:357] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fed74b4a1af14293b6a4d31ec001c290" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 45533 } }
I20250623 14:08:09.560381 8023 raft_consensus.cc:383] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:09.560567 8023 raft_consensus.cc:738] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fed74b4a1af14293b6a4d31ec001c290, State: Initialized, Role: FOLLOWER
I20250623 14:08:09.561231 8023 consensus_queue.cc:260] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fed74b4a1af14293b6a4d31ec001c290" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 45533 } }
I20250623 14:08:09.561712 8023 raft_consensus.cc:397] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:08:09.561964 8023 raft_consensus.cc:491] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:08:09.562227 8023 raft_consensus.cc:3058] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:09.566738 8023 raft_consensus.cc:513] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fed74b4a1af14293b6a4d31ec001c290" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 45533 } }
I20250623 14:08:09.567546 8023 leader_election.cc:304] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: fed74b4a1af14293b6a4d31ec001c290; no voters:
I20250623 14:08:09.569233 8023 leader_election.cc:290] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:08:09.569628 8025 raft_consensus.cc:2802] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:09.572202 8025 raft_consensus.cc:695] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [term 1 LEADER]: Becoming Leader. State: Replica: fed74b4a1af14293b6a4d31ec001c290, State: Running, Role: LEADER
I20250623 14:08:09.573141 8008 heartbeater.cc:499] Master 127.2.78.62:40621 was elected leader, sending a full tablet report...
I20250623 14:08:09.573141 8025 consensus_queue.cc:237] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fed74b4a1af14293b6a4d31ec001c290" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 45533 } }
I20250623 14:08:09.573947 8023 ts_tablet_manager.cc:1428] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290: Time spent starting tablet: real 0.031s user 0.023s sys 0.009s
I20250623 14:08:09.585953 7819 catalog_manager.cc:5582] T be5258e96526402d86448f5676f348b9 P fed74b4a1af14293b6a4d31ec001c290 reported cstate change: term changed from 0 to 1, leader changed from <none> to fed74b4a1af14293b6a4d31ec001c290 (127.2.78.1). New cstate: current_term: 1 leader_uuid: "fed74b4a1af14293b6a4d31ec001c290" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fed74b4a1af14293b6a4d31ec001c290" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 45533 } health_report { overall_health: HEALTHY } } }
I20250623 14:08:09.611958 2360 external_mini_cluster.cc:934] 1 TS(s) registered with all masters
I20250623 14:08:09.614794 2360 ts_itest-base.cc:246] Waiting for 1 tablets on tserver fed74b4a1af14293b6a4d31ec001c290 to finish bootstrapping
I20250623 14:08:12.280822 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7879
I20250623 14:08:12.306206 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 7787
2025-06-23T14:08:12Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (6735 ms)
[----------] 1 test from ListTableCliSimpleParamTest (6735 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250623 14:08:12.362308 2360 test_util.cc:276] Using random seed: -1107029462
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (12 ms)
[----------] 1 test from ListTableCliParamTest (12 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-06-23T14:08:12Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-23T14:08:12Z Disabled control of system clock
I20250623 14:08:12.415568 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:34837
--webserver_interface=127.2.78.62
--webserver_port=0
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:34837 with env {}
W20250623 14:08:12.709539 8051 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:12.710176 8051 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:12.710664 8051 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:12.741669 8051 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:08:12.742014 8051 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:12.742283 8051 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:08:12.742514 8051 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:08:12.776816 8051 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:34837
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:34837
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:12.778179 8051 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:12.779788 8051 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:12.795504 8058 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:12.795517 8060 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:12.797113 8051 server_base.cc:1048] running on GCE node
W20250623 14:08:12.795543 8057 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:13.953440 8051 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:13.956027 8051 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:13.957448 8051 hybrid_clock.cc:648] HybridClock initialized: now 1750687693957410 us; error 56 us; skew 500 ppm
I20250623 14:08:13.958289 8051 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:13.970983 8051 webserver.cc:469] Webserver started at http://127.2.78.62:43497/ using document root <none> and password file <none>
I20250623 14:08:13.971944 8051 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:13.972155 8051 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:13.972638 8051 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:13.977082 8051 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/instance:
uuid: "8bfb032e118b4349b500756a861927f7"
format_stamp: "Formatted at 2025-06-23 14:08:13 on dist-test-slave-stbh"
I20250623 14:08:13.978295 8051 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal/instance:
uuid: "8bfb032e118b4349b500756a861927f7"
format_stamp: "Formatted at 2025-06-23 14:08:13 on dist-test-slave-stbh"
I20250623 14:08:13.985600 8051 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250623 14:08:13.991196 8067 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:13.992174 8051 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.003s
I20250623 14:08:13.992499 8051 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
uuid: "8bfb032e118b4349b500756a861927f7"
format_stamp: "Formatted at 2025-06-23 14:08:13 on dist-test-slave-stbh"
I20250623 14:08:13.992817 8051 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:14.066628 8051 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:14.068102 8051 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:14.068537 8051 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:14.137432 8051 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:34837
I20250623 14:08:14.137527 8118 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:34837 every 8 connection(s)
I20250623 14:08:14.140023 8051 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
I20250623 14:08:14.145009 8119 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:14.148465 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8051
I20250623 14:08:14.148862 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal/instance
I20250623 14:08:14.168629 8119 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7: Bootstrap starting.
I20250623 14:08:14.175462 8119 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:14.177145 8119 log.cc:826] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:14.182067 8119 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7: No bootstrap required, opened a new log
I20250623 14:08:14.199579 8119 raft_consensus.cc:357] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:14.200215 8119 raft_consensus.cc:383] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:14.200408 8119 raft_consensus.cc:738] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8bfb032e118b4349b500756a861927f7, State: Initialized, Role: FOLLOWER
I20250623 14:08:14.200985 8119 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:14.201472 8119 raft_consensus.cc:397] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:08:14.201731 8119 raft_consensus.cc:491] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:08:14.202129 8119 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:14.206290 8119 raft_consensus.cc:513] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:14.206969 8119 leader_election.cc:304] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8bfb032e118b4349b500756a861927f7; no voters:
I20250623 14:08:14.208555 8119 leader_election.cc:290] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:08:14.209213 8124 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:14.211347 8124 raft_consensus.cc:695] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [term 1 LEADER]: Becoming Leader. State: Replica: 8bfb032e118b4349b500756a861927f7, State: Running, Role: LEADER
I20250623 14:08:14.212018 8124 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:14.212613 8119 sys_catalog.cc:564] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:08:14.223009 8125 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "8bfb032e118b4349b500756a861927f7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } } }
I20250623 14:08:14.223692 8125 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:14.225060 8126 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8bfb032e118b4349b500756a861927f7. Latest consensus state: current_term: 1 leader_uuid: "8bfb032e118b4349b500756a861927f7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8bfb032e118b4349b500756a861927f7" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } } }
I20250623 14:08:14.225854 8126 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7 [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:14.229331 8132 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:08:14.239497 8132 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:08:14.257580 8132 catalog_manager.cc:1349] Generated new cluster ID: e7384033bf1a4503bba5bb6e26b337d4
I20250623 14:08:14.257886 8132 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:08:14.289368 8132 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:08:14.290859 8132 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:08:14.305451 8132 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 8bfb032e118b4349b500756a861927f7: Generated new TSK 0
I20250623 14:08:14.306538 8132 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250623 14:08:14.328243 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:0
--local_ip_for_outbound_sockets=127.2.78.1
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250623 14:08:14.633679 8143 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:14.634220 8143 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:14.634709 8143 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:14.667151 8143 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:14.667997 8143 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:08:14.703181 8143 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:14.704500 8143 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:14.706269 8143 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:14.723255 8149 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:16.121938 8148 debug-util.cc:398] Leaking SignalData structure 0x7b0800000a80 after lost signal to thread 8143
W20250623 14:08:14.723495 8150 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:16.220516 8143 thread.cc:640] OpenStack (cloud detector) Time spent creating pthread: real 1.501s user 0.431s sys 0.870s
W20250623 14:08:16.222648 8152 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:16.222606 8151 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1500 milliseconds
W20250623 14:08:16.222990 8143 thread.cc:606] OpenStack (cloud detector) Time spent starting thread: real 1.504s user 0.431s sys 0.871s
I20250623 14:08:16.223343 8143 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:16.228039 8143 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:16.230687 8143 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:16.232210 8143 hybrid_clock.cc:648] HybridClock initialized: now 1750687696232159 us; error 46 us; skew 500 ppm
I20250623 14:08:16.233325 8143 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:16.240962 8143 webserver.cc:469] Webserver started at http://127.2.78.1:46293/ using document root <none> and password file <none>
I20250623 14:08:16.242221 8143 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:16.242514 8143 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:16.243150 8143 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:16.249667 8143 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/instance:
uuid: "559c0ec04ad84e79986e59ea400ac9dd"
format_stamp: "Formatted at 2025-06-23 14:08:16 on dist-test-slave-stbh"
I20250623 14:08:16.250947 8143 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal/instance:
uuid: "559c0ec04ad84e79986e59ea400ac9dd"
format_stamp: "Formatted at 2025-06-23 14:08:16 on dist-test-slave-stbh"
I20250623 14:08:16.258277 8143 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.002s
I20250623 14:08:16.264323 8160 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:16.265569 8143 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.003s
I20250623 14:08:16.265920 8143 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
uuid: "559c0ec04ad84e79986e59ea400ac9dd"
format_stamp: "Formatted at 2025-06-23 14:08:16 on dist-test-slave-stbh"
I20250623 14:08:16.266268 8143 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:16.326748 8143 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:16.328375 8143 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:16.328825 8143 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:16.331699 8143 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:16.336122 8143 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:08:16.336344 8143 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:16.336586 8143 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:08:16.336741 8143 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:16.500002 8143 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:46593
I20250623 14:08:16.500164 8272 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:46593 every 8 connection(s)
I20250623 14:08:16.502626 8143 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
I20250623 14:08:16.506888 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8143
I20250623 14:08:16.507665 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal/instance
I20250623 14:08:16.519229 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:0
--local_ip_for_outbound_sockets=127.2.78.2
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250623 14:08:16.526746 8273 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:16.527184 8273 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:16.528182 8273 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:16.531129 8084 ts_manager.cc:194] Registered new tserver with Master: 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593)
I20250623 14:08:16.534075 8084 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:55049
W20250623 14:08:16.822402 8277 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:16.822907 8277 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:16.823438 8277 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:16.854215 8277 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:16.855129 8277 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:08:16.889799 8277 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:16.891105 8277 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:16.892724 8277 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:16.908059 8284 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:17.538167 8273 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
W20250623 14:08:16.908181 8283 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:16.909978 8286 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:18.043150 8285 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:08:18.043247 8277 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:18.047307 8277 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:18.050040 8277 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:18.051487 8277 hybrid_clock.cc:648] HybridClock initialized: now 1750687698051426 us; error 72 us; skew 500 ppm
I20250623 14:08:18.052277 8277 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:18.059052 8277 webserver.cc:469] Webserver started at http://127.2.78.2:33749/ using document root <none> and password file <none>
I20250623 14:08:18.059966 8277 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:18.060155 8277 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:18.060606 8277 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:18.065076 8277 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/instance:
uuid: "1386ca6603524bb28bfa1f42a60e7e91"
format_stamp: "Formatted at 2025-06-23 14:08:18 on dist-test-slave-stbh"
I20250623 14:08:18.066226 8277 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal/instance:
uuid: "1386ca6603524bb28bfa1f42a60e7e91"
format_stamp: "Formatted at 2025-06-23 14:08:18 on dist-test-slave-stbh"
I20250623 14:08:18.073264 8277 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250623 14:08:18.078845 8293 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:18.079879 8277 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250623 14:08:18.080227 8277 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
uuid: "1386ca6603524bb28bfa1f42a60e7e91"
format_stamp: "Formatted at 2025-06-23 14:08:18 on dist-test-slave-stbh"
I20250623 14:08:18.080565 8277 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:18.135142 8277 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:18.136586 8277 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:18.137025 8277 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:18.140112 8277 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:18.144815 8277 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:08:18.145056 8277 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:18.145313 8277 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:08:18.145473 8277 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:18.279084 8277 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:39863
I20250623 14:08:18.279192 8405 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:39863 every 8 connection(s)
I20250623 14:08:18.281608 8277 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
I20250623 14:08:18.285677 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8277
I20250623 14:08:18.286253 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal/instance
I20250623 14:08:18.292959 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:0
--local_ip_for_outbound_sockets=127.2.78.3
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250623 14:08:18.302832 8406 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:18.303267 8406 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:18.304303 8406 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:18.306569 8084 ts_manager.cc:194] Registered new tserver with Master: 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:18.308367 8084 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:54061
W20250623 14:08:18.595597 8410 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:18.596062 8410 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:18.596515 8410 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:18.628391 8410 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:18.629268 8410 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:08:18.664011 8410 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=0
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:18.665302 8410 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:18.666957 8410 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:18.682964 8416 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:19.311411 8406 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
W20250623 14:08:18.683743 8419 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:18.683041 8417 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:19.829264 8418 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:08:19.829389 8410 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:19.830569 8410 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:19.832734 8410 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:19.834060 8410 hybrid_clock.cc:648] HybridClock initialized: now 1750687699834023 us; error 51 us; skew 500 ppm
I20250623 14:08:19.834827 8410 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:19.845753 8410 webserver.cc:469] Webserver started at http://127.2.78.3:44243/ using document root <none> and password file <none>
I20250623 14:08:19.846763 8410 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:19.846987 8410 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:19.847445 8410 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:19.851747 8410 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/instance:
uuid: "0934df7e51594c3fbf25e74bf3d4404c"
format_stamp: "Formatted at 2025-06-23 14:08:19 on dist-test-slave-stbh"
I20250623 14:08:19.852785 8410 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal/instance:
uuid: "0934df7e51594c3fbf25e74bf3d4404c"
format_stamp: "Formatted at 2025-06-23 14:08:19 on dist-test-slave-stbh"
I20250623 14:08:19.859441 8410 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.003s sys 0.005s
I20250623 14:08:19.864938 8426 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:19.865991 8410 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250623 14:08:19.866322 8410 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
uuid: "0934df7e51594c3fbf25e74bf3d4404c"
format_stamp: "Formatted at 2025-06-23 14:08:19 on dist-test-slave-stbh"
I20250623 14:08:19.866667 8410 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:19.930308 8410 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:19.931717 8410 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:19.932129 8410 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:19.934636 8410 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:19.938314 8410 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250623 14:08:19.938527 8410 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:19.938767 8410 ts_tablet_manager.cc:610] Registered 0 tablets
I20250623 14:08:19.938925 8410 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:20.071913 8410 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:43197
I20250623 14:08:20.072013 8538 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:43197 every 8 connection(s)
I20250623 14:08:20.074342 8410 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
I20250623 14:08:20.083623 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8410
I20250623 14:08:20.084022 2360 external_mini_cluster.cc:1427] Reading /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal/instance
I20250623 14:08:20.095199 8539 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:20.095630 8539 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:20.096594 8539 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:20.098543 8084 ts_manager.cc:194] Registered new tserver with Master: 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197)
I20250623 14:08:20.099738 8084 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:60927
I20250623 14:08:20.104019 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:08:20.133980 2360 test_util.cc:276] Using random seed: -1099257787
I20250623 14:08:20.177260 8084 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:44410:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250623 14:08:20.179644 8084 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:08:20.226840 8474 tablet_service.cc:1468] Processing CreateTablet for tablet 51f609bc37a94f64ad7219b32595819b (DEFAULT_TABLE table=pre_rebuild [id=6b4b88174ece460a93f887073f52c84e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:20.228703 8474 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 51f609bc37a94f64ad7219b32595819b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:20.234624 8208 tablet_service.cc:1468] Processing CreateTablet for tablet 51f609bc37a94f64ad7219b32595819b (DEFAULT_TABLE table=pre_rebuild [id=6b4b88174ece460a93f887073f52c84e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:20.236186 8208 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 51f609bc37a94f64ad7219b32595819b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:20.238034 8341 tablet_service.cc:1468] Processing CreateTablet for tablet 51f609bc37a94f64ad7219b32595819b (DEFAULT_TABLE table=pre_rebuild [id=6b4b88174ece460a93f887073f52c84e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:20.239890 8341 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 51f609bc37a94f64ad7219b32595819b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:20.257449 8562 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Bootstrap starting.
I20250623 14:08:20.259943 8563 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Bootstrap starting.
I20250623 14:08:20.266526 8562 tablet_bootstrap.cc:654] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:20.268333 8563 tablet_bootstrap.cc:654] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:20.268779 8564 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Bootstrap starting.
I20250623 14:08:20.269187 8562 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:20.270771 8563 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:20.274677 8562 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: No bootstrap required, opened a new log
I20250623 14:08:20.275122 8562 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent bootstrapping tablet: real 0.018s user 0.015s sys 0.000s
I20250623 14:08:20.276165 8564 tablet_bootstrap.cc:654] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:20.276646 8563 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: No bootstrap required, opened a new log
I20250623 14:08:20.277259 8563 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent bootstrapping tablet: real 0.018s user 0.011s sys 0.006s
I20250623 14:08:20.281998 8564 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:20.301505 8562 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.302644 8562 raft_consensus.cc:383] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:20.302992 8562 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 559c0ec04ad84e79986e59ea400ac9dd, State: Initialized, Role: FOLLOWER
I20250623 14:08:20.304105 8562 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.304842 8564 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: No bootstrap required, opened a new log
I20250623 14:08:20.305449 8564 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent bootstrapping tablet: real 0.038s user 0.003s sys 0.026s
I20250623 14:08:20.307366 8563 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.308279 8563 raft_consensus.cc:383] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:20.308612 8563 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0934df7e51594c3fbf25e74bf3d4404c, State: Initialized, Role: FOLLOWER
I20250623 14:08:20.309478 8563 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.314438 8562 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent starting tablet: real 0.039s user 0.030s sys 0.008s
I20250623 14:08:20.318156 8539 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
I20250623 14:08:20.321094 8563 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent starting tablet: real 0.044s user 0.029s sys 0.007s
W20250623 14:08:20.328791 8540 tablet.cc:2378] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:08:20.335160 8564 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.335858 8564 raft_consensus.cc:383] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:20.336117 8564 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1386ca6603524bb28bfa1f42a60e7e91, State: Initialized, Role: FOLLOWER
I20250623 14:08:20.336792 8564 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.340025 8564 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent starting tablet: real 0.034s user 0.024s sys 0.003s
W20250623 14:08:20.511255 8274 tablet.cc:2378] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:08:20.538558 8407 tablet.cc:2378] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:08:20.623411 8569 raft_consensus.cc:491] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:08:20.623874 8569 raft_consensus.cc:513] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.626091 8569 leader_election.cc:290] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593), 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:20.636741 8228 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "559c0ec04ad84e79986e59ea400ac9dd" is_pre_election: true
I20250623 14:08:20.637677 8228 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 0.
I20250623 14:08:20.639062 8427 leader_election.cc:304] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0934df7e51594c3fbf25e74bf3d4404c, 559c0ec04ad84e79986e59ea400ac9dd; no voters:
I20250623 14:08:20.639288 8361 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91" is_pre_election: true
I20250623 14:08:20.639909 8569 raft_consensus.cc:2802] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:08:20.639990 8361 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 0.
I20250623 14:08:20.640161 8569 raft_consensus.cc:491] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:08:20.640448 8569 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:20.644845 8569 raft_consensus.cc:513] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.646274 8569 leader_election.cc:290] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 election: Requested vote from peers 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593), 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:20.646837 8228 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "559c0ec04ad84e79986e59ea400ac9dd"
I20250623 14:08:20.647078 8361 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91"
I20250623 14:08:20.647220 8228 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:20.647511 8361 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:20.651846 8228 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 1.
I20250623 14:08:20.652109 8361 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 1.
I20250623 14:08:20.652663 8427 leader_election.cc:304] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0934df7e51594c3fbf25e74bf3d4404c, 559c0ec04ad84e79986e59ea400ac9dd; no voters:
I20250623 14:08:20.653214 8569 raft_consensus.cc:2802] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:20.654793 8569 raft_consensus.cc:695] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 LEADER]: Becoming Leader. State: Replica: 0934df7e51594c3fbf25e74bf3d4404c, State: Running, Role: LEADER
I20250623 14:08:20.655495 8569 consensus_queue.cc:237] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:20.666814 8084 catalog_manager.cc:5582] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c reported cstate change: term changed from 0 to 1, leader changed from <none> to 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3). New cstate: current_term: 1 leader_uuid: "0934df7e51594c3fbf25e74bf3d4404c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } health_report { overall_health: UNKNOWN } } }
I20250623 14:08:20.821969 8228 raft_consensus.cc:1273] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Refusing update from remote peer 0934df7e51594c3fbf25e74bf3d4404c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250623 14:08:20.821969 8361 raft_consensus.cc:1273] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Refusing update from remote peer 0934df7e51594c3fbf25e74bf3d4404c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250623 14:08:20.823441 8573 consensus_queue.cc:1035] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Connected to new peer: Peer: permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:08:20.824157 8569 consensus_queue.cc:1035] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Connected to new peer: Peer: permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:08:20.855811 8582 mvcc.cc:204] Tried to move back new op lower bound from 7170816822550441984 to 7170816821892005888. Current Snapshot: MvccSnapshot[applied={T|T < 7170816822550441984}]
I20250623 14:08:25.790645 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8051
W20250623 14:08:25.893225 8539 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:34837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:34837: connect: Connection refused (error 111)
W20250623 14:08:25.923704 8406 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:34837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:34837: connect: Connection refused (error 111)
W20250623 14:08:25.939653 8273 heartbeater.cc:646] Failed to heartbeat to 127.2.78.62:34837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.78.62:34837: connect: Connection refused (error 111)
W20250623 14:08:26.147450 8616 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:26.148054 8616 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:26.181310 8616 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250623 14:08:27.635157 8624 debug-util.cc:398] Leaking SignalData structure 0x7b0800036040 after lost signal to thread 8616
W20250623 14:08:27.635771 8624 kernel_stack_watchdog.cc:198] Thread 8616 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:641 for 403ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250623 14:08:27.752177 8616 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.522s user 0.001s sys 0.001s
W20250623 14:08:27.850562 8616 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.621s user 0.004s sys 0.012s
I20250623 14:08:27.926051 8616 minidump.cc:252] Setting minidump size limit to 20M
I20250623 14:08:27.928027 8616 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:27.929188 8616 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:27.940083 8650 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:27.941222 8651 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:27.943513 8653 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:27.944264 8616 server_base.cc:1048] running on GCE node
I20250623 14:08:27.945374 8616 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250623 14:08:27.945856 8616 hybrid_clock.cc:648] HybridClock initialized: now 1750687707945833 us; error 316722 us; skew 500 ppm
I20250623 14:08:27.946593 8616 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:27.951943 8616 webserver.cc:469] Webserver started at http://0.0.0.0:39735/ using document root <none> and password file <none>
I20250623 14:08:27.952752 8616 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:27.952970 8616 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:27.953389 8616 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250623 14:08:27.957466 8616 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/instance:
uuid: "f312bdd0c4b247849ed02009fb7562cf"
format_stamp: "Formatted at 2025-06-23 14:08:27 on dist-test-slave-stbh"
I20250623 14:08:27.958549 8616 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal/instance:
uuid: "f312bdd0c4b247849ed02009fb7562cf"
format_stamp: "Formatted at 2025-06-23 14:08:27 on dist-test-slave-stbh"
I20250623 14:08:27.964272 8616 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.001s
I20250623 14:08:27.968904 8660 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:27.969805 8616 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:08:27.970119 8616 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
uuid: "f312bdd0c4b247849ed02009fb7562cf"
format_stamp: "Formatted at 2025-06-23 14:08:27 on dist-test-slave-stbh"
I20250623 14:08:27.970438 8616 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:28.184940 8616 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:28.186376 8616 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:28.186820 8616 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:28.191366 8616 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:28.205678 8616 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Bootstrap starting.
I20250623 14:08:28.210453 8616 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:28.212069 8616 log.cc:826] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:28.216037 8616 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: No bootstrap required, opened a new log
I20250623 14:08:28.231825 8616 raft_consensus.cc:357] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER }
I20250623 14:08:28.232297 8616 raft_consensus.cc:383] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:28.232492 8616 raft_consensus.cc:738] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f312bdd0c4b247849ed02009fb7562cf, State: Initialized, Role: FOLLOWER
I20250623 14:08:28.233094 8616 consensus_queue.cc:260] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER }
I20250623 14:08:28.233544 8616 raft_consensus.cc:397] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:08:28.233781 8616 raft_consensus.cc:491] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:08:28.234054 8616 raft_consensus.cc:3058] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:28.237917 8616 raft_consensus.cc:513] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER }
I20250623 14:08:28.238749 8616 leader_election.cc:304] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f312bdd0c4b247849ed02009fb7562cf; no voters:
I20250623 14:08:28.240309 8616 leader_election.cc:290] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [CANDIDATE]: Term 1 election: Requested vote from peers
I20250623 14:08:28.240551 8667 raft_consensus.cc:2802] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:28.242713 8667 raft_consensus.cc:695] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 LEADER]: Becoming Leader. State: Replica: f312bdd0c4b247849ed02009fb7562cf, State: Running, Role: LEADER
I20250623 14:08:28.243435 8667 consensus_queue.cc:237] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER }
I20250623 14:08:28.250634 8668 sys_catalog.cc:455] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: SysCatalogTable state changed. Reason: New leader f312bdd0c4b247849ed02009fb7562cf. Latest consensus state: current_term: 1 leader_uuid: "f312bdd0c4b247849ed02009fb7562cf" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER } }
I20250623 14:08:28.251055 8668 sys_catalog.cc:458] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:28.251735 8669 sys_catalog.cc:455] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "f312bdd0c4b247849ed02009fb7562cf" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER } }
I20250623 14:08:28.252123 8669 sys_catalog.cc:458] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:28.263221 8616 tablet_replica.cc:331] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: stopping tablet replica
I20250623 14:08:28.263732 8616 raft_consensus.cc:2241] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 LEADER]: Raft consensus shutting down.
I20250623 14:08:28.264137 8616 raft_consensus.cc:2270] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Raft consensus is shut down!
I20250623 14:08:28.266098 8616 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250623 14:08:28.266515 8616 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250623 14:08:28.320900 8616 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250623 14:08:29.349908 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8143
W20250623 14:08:29.385536 8427 connection.cc:537] client connection to 127.2.78.1:46593 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250623 14:08:29.385968 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8277
W20250623 14:08:29.386101 8427 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250623 14:08:29.391224 8427 consensus_peers.cc:487] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c -> Peer 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593): Couldn't send request to peer 559c0ec04ad84e79986e59ea400ac9dd. Status: Network error: Client connection negotiation failed: client connection to 127.2.78.1:46593: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250623 14:08:29.418905 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8410
I20250623 14:08:29.459115 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:34837
--webserver_interface=127.2.78.62
--webserver_port=43497
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.78.62:34837 with env {}
W20250623 14:08:29.760416 8678 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:29.760959 8678 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:29.761371 8678 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:29.792310 8678 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250623 14:08:29.792598 8678 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:29.792814 8678 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250623 14:08:29.793004 8678 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250623 14:08:29.827656 8678 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.78.62:34837
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.78.62:34837
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.2.78.62
--webserver_port=43497
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:29.828917 8678 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:29.830490 8678 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:29.844182 8685 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:29.844178 8684 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:29.846086 8687 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:31.003368 8686 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:08:31.003521 8678 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:31.007244 8678 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:31.010313 8678 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:31.011746 8678 hybrid_clock.cc:648] HybridClock initialized: now 1750687711011705 us; error 59 us; skew 500 ppm
I20250623 14:08:31.012552 8678 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:31.027290 8678 webserver.cc:469] Webserver started at http://127.2.78.62:43497/ using document root <none> and password file <none>
I20250623 14:08:31.028188 8678 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:31.028389 8678 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:31.036063 8678 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250623 14:08:31.040498 8694 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:31.041451 8678 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250623 14:08:31.041795 8678 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
uuid: "f312bdd0c4b247849ed02009fb7562cf"
format_stamp: "Formatted at 2025-06-23 14:08:27 on dist-test-slave-stbh"
I20250623 14:08:31.043666 8678 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:31.106913 8678 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:31.108327 8678 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:31.108774 8678 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:31.178494 8678 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.62:34837
I20250623 14:08:31.178582 8745 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.62:34837 every 8 connection(s)
I20250623 14:08:31.181183 8678 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb
I20250623 14:08:31.184077 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8678
I20250623 14:08:31.186192 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.1:46593
--local_ip_for_outbound_sockets=127.2.78.1
--tserver_master_addrs=127.2.78.62:34837
--webserver_port=46293
--webserver_interface=127.2.78.1
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250623 14:08:31.195916 8746 sys_catalog.cc:263] Verifying existing consensus state
I20250623 14:08:31.216696 8746 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Bootstrap starting.
I20250623 14:08:31.230747 8746 log.cc:826] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:31.245051 8746 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:08:31.246038 8746 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Bootstrap complete.
I20250623 14:08:31.275805 8746 raft_consensus.cc:357] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:31.276698 8746 raft_consensus.cc:738] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: f312bdd0c4b247849ed02009fb7562cf, State: Initialized, Role: FOLLOWER
I20250623 14:08:31.277614 8746 consensus_queue.cc:260] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:31.278311 8746 raft_consensus.cc:397] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250623 14:08:31.278636 8746 raft_consensus.cc:491] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250623 14:08:31.278983 8746 raft_consensus.cc:3058] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:08:31.285081 8746 raft_consensus.cc:513] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:31.285976 8746 leader_election.cc:304] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f312bdd0c4b247849ed02009fb7562cf; no voters:
I20250623 14:08:31.288036 8746 leader_election.cc:290] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [CANDIDATE]: Term 2 election: Requested vote from peers
I20250623 14:08:31.288389 8750 raft_consensus.cc:2802] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:08:31.291787 8750 raft_consensus.cc:695] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [term 2 LEADER]: Becoming Leader. State: Replica: f312bdd0c4b247849ed02009fb7562cf, State: Running, Role: LEADER
I20250623 14:08:31.292837 8746 sys_catalog.cc:564] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: configured and running, proceeding with master startup.
I20250623 14:08:31.292621 8750 consensus_queue.cc:237] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } }
I20250623 14:08:31.308307 8751 sys_catalog.cc:455] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "f312bdd0c4b247849ed02009fb7562cf" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } } }
I20250623 14:08:31.309365 8751 sys_catalog.cc:458] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:31.311745 8752 sys_catalog.cc:455] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: SysCatalogTable state changed. Reason: New leader f312bdd0c4b247849ed02009fb7562cf. Latest consensus state: current_term: 2 leader_uuid: "f312bdd0c4b247849ed02009fb7562cf" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f312bdd0c4b247849ed02009fb7562cf" member_type: VOTER last_known_addr { host: "127.2.78.62" port: 34837 } } }
I20250623 14:08:31.312520 8752 sys_catalog.cc:458] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf [sys.catalog]: This master's current role is: LEADER
I20250623 14:08:31.315485 8757 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250623 14:08:31.328536 8757 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=2f9f2f0e69964959b8b7e7224d7da704]
I20250623 14:08:31.335547 8757 tablet_loader.cc:96] loaded metadata for tablet 51f609bc37a94f64ad7219b32595819b (table pre_rebuild [id=2f9f2f0e69964959b8b7e7224d7da704])
I20250623 14:08:31.336925 8757 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250623 14:08:31.366577 8757 catalog_manager.cc:1349] Generated new cluster ID: 5b9f49e2939a472ca54026f9f683909e
I20250623 14:08:31.366909 8757 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250623 14:08:31.403832 8757 catalog_manager.cc:1372] Generated new certificate authority record
I20250623 14:08:31.405181 8757 catalog_manager.cc:1506] Loading token signing keys...
I20250623 14:08:31.421947 8757 catalog_manager.cc:5955] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Generated new TSK 0
I20250623 14:08:31.422801 8757 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250623 14:08:31.572645 8748 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:31.573146 8748 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:31.573638 8748 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:31.604492 8748 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:31.605345 8748 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.1
I20250623 14:08:31.640120 8748 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.1:46593
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.2.78.1
--webserver_port=46293
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.1
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:31.641710 8748 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:31.643939 8748 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:31.670661 8775 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:31.670667 8774 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:31.672709 8748 server_base.cc:1048] running on GCE node
W20250623 14:08:31.671265 8777 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:32.837313 8748 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:32.840041 8748 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:32.841511 8748 hybrid_clock.cc:648] HybridClock initialized: now 1750687712841460 us; error 50 us; skew 500 ppm
I20250623 14:08:32.842640 8748 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:32.850076 8748 webserver.cc:469] Webserver started at http://127.2.78.1:46293/ using document root <none> and password file <none>
I20250623 14:08:32.851307 8748 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:32.851599 8748 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:32.862103 8748 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.007s sys 0.001s
I20250623 14:08:32.866849 8784 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:32.867954 8748 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250623 14:08:32.868286 8748 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
uuid: "559c0ec04ad84e79986e59ea400ac9dd"
format_stamp: "Formatted at 2025-06-23 14:08:16 on dist-test-slave-stbh"
I20250623 14:08:32.870388 8748 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:32.929201 8748 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:32.930634 8748 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:32.931018 8748 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:32.933557 8748 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:32.939697 8791 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:08:32.950368 8748 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:08:32.950600 8748 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.002s sys 0.000s
I20250623 14:08:32.950877 8748 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:08:32.955353 8748 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:08:32.955544 8748 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250623 14:08:32.956037 8791 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Bootstrap starting.
I20250623 14:08:33.144899 8748 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.1:46593
I20250623 14:08:33.145087 8898 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.1:46593 every 8 connection(s)
I20250623 14:08:33.148703 8748 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb
I20250623 14:08:33.153501 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8748
I20250623 14:08:33.155457 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.2:39863
--local_ip_for_outbound_sockets=127.2.78.2
--tserver_master_addrs=127.2.78.62:34837
--webserver_port=33749
--webserver_interface=127.2.78.2
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250623 14:08:33.186738 8899 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:33.187261 8899 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:33.188503 8899 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:33.193346 8711 ts_manager.cc:194] Registered new tserver with Master: 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593)
I20250623 14:08:33.201033 8711 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.1:40065
I20250623 14:08:33.263630 8791 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Log is configured to *not* fsync() on all Append() calls
W20250623 14:08:33.481628 8903 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:33.482395 8903 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:33.482968 8903 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:33.514261 8903 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:33.515095 8903 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.2
I20250623 14:08:33.549444 8903 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.2:39863
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.2.78.2
--webserver_port=33749
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.2
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:33.550748 8903 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:33.552352 8903 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:33.569187 8911 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:34.204993 8899 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
W20250623 14:08:33.569265 8913 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:33.569257 8910 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:34.748467 8912 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250623 14:08:34.749044 8903 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:34.752566 8903 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:34.755215 8903 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:34.756630 8903 hybrid_clock.cc:648] HybridClock initialized: now 1750687714756571 us; error 75 us; skew 500 ppm
I20250623 14:08:34.757405 8903 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:34.764415 8903 webserver.cc:469] Webserver started at http://127.2.78.2:33749/ using document root <none> and password file <none>
I20250623 14:08:34.765309 8903 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:34.765516 8903 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:34.773248 8903 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250623 14:08:34.778107 8920 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:34.779131 8903 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250623 14:08:34.779433 8903 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
uuid: "1386ca6603524bb28bfa1f42a60e7e91"
format_stamp: "Formatted at 2025-06-23 14:08:18 on dist-test-slave-stbh"
I20250623 14:08:34.781222 8903 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:34.888574 8903 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:34.890038 8903 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:34.890463 8903 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:34.893381 8903 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:34.899963 8927 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:08:34.907317 8903 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:08:34.907519 8903 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250623 14:08:34.907796 8903 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:08:34.912477 8903 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:08:34.912724 8903 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250623 14:08:34.913098 8927 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Bootstrap starting.
I20250623 14:08:35.093618 8903 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.2:39863
I20250623 14:08:35.093842 9033 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.2:39863 every 8 connection(s)
I20250623 14:08:35.097419 8903 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb
I20250623 14:08:35.106931 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 8903
I20250623 14:08:35.114020 2360 external_mini_cluster.cc:1351] Running /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
/tmp/dist-test-task0z5RNj/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.78.3:43197
--local_ip_for_outbound_sockets=127.2.78.3
--tserver_master_addrs=127.2.78.62:34837
--webserver_port=44243
--webserver_interface=127.2.78.3
--builtin_ntp_servers=127.2.78.20:36191
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250623 14:08:35.154878 9034 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:35.155481 9034 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:35.156800 9034 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:35.161540 8711 ts_manager.cc:194] Registered new tserver with Master: 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:35.165331 8711 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.2:38803
I20250623 14:08:35.236799 8927 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:35.423462 8791 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:08:35.424507 8791 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Bootstrap complete.
I20250623 14:08:35.426316 8791 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent bootstrapping tablet: real 2.471s user 2.341s sys 0.088s
I20250623 14:08:35.443507 8791 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:35.446687 8791 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 559c0ec04ad84e79986e59ea400ac9dd, State: Initialized, Role: FOLLOWER
I20250623 14:08:35.447618 8791 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:35.451139 8791 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent starting tablet: real 0.025s user 0.024s sys 0.000s
W20250623 14:08:35.553267 9038 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250623 14:08:35.553784 9038 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250623 14:08:35.554301 9038 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250623 14:08:35.585275 9038 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250623 14:08:35.586151 9038 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.78.3
I20250623 14:08:35.620240 9038 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.78.20:36191
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.78.3:43197
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.2.78.3
--webserver_port=44243
--tserver_master_addrs=127.2.78.62:34837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.78.3
--log_dir=/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision 1550f239ea6e50bb8f84a51255fe1b8accb536aa
build type FASTDEBUG
built by None at 23 Jun 2025 13:57:07 UTC on 24a791456cd2
build id 6733
TSAN enabled
I20250623 14:08:35.621493 9038 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250623 14:08:35.623088 9038 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250623 14:08:35.639801 9048 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250623 14:08:36.169943 9034 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
W20250623 14:08:35.650971 9050 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:35.641135 9047 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250623 14:08:37.032873 9049 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1390 milliseconds
I20250623 14:08:37.032974 9038 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250623 14:08:37.034188 9038 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250623 14:08:37.036873 9038 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250623 14:08:37.038305 9038 hybrid_clock.cc:648] HybridClock initialized: now 1750687717038263 us; error 55 us; skew 500 ppm
I20250623 14:08:37.039099 9038 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250623 14:08:37.051424 9038 webserver.cc:469] Webserver started at http://127.2.78.3:44243/ using document root <none> and password file <none>
I20250623 14:08:37.052338 9038 fs_manager.cc:362] Metadata directory not provided
I20250623 14:08:37.052558 9038 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250623 14:08:37.060797 9038 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250623 14:08:37.065877 9057 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250623 14:08:37.066937 9038 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250623 14:08:37.067240 9038 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data,/tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
uuid: "0934df7e51594c3fbf25e74bf3d4404c"
format_stamp: "Formatted at 2025-06-23 14:08:19 on dist-test-slave-stbh"
I20250623 14:08:37.069173 9038 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250623 14:08:37.132745 9038 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250623 14:08:37.134198 9038 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250623 14:08:37.134639 9038 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250623 14:08:37.137174 9038 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250623 14:08:37.143486 9064 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250623 14:08:37.154188 9038 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250623 14:08:37.154431 9038 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250623 14:08:37.154734 9038 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250623 14:08:37.159268 9038 ts_tablet_manager.cc:610] Registered 1 tablets
I20250623 14:08:37.159459 9038 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250623 14:08:37.159924 9064 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Bootstrap starting.
I20250623 14:08:37.244736 9079 raft_consensus.cc:491] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:08:37.246894 9079 raft_consensus.cc:513] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:37.255712 9079 leader_election.cc:290] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197), 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
W20250623 14:08:37.271342 8785 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.78.3:43197: connect: Connection refused (error 111)
W20250623 14:08:37.291096 8785 leader_election.cc:336] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197): Network error: Client connection negotiation failed: client connection to 127.2.78.3:43197: connect: Connection refused (error 111)
I20250623 14:08:37.295673 8989 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "559c0ec04ad84e79986e59ea400ac9dd" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91" is_pre_election: true
W20250623 14:08:37.308449 8785 leader_election.cc:343] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863): Illegal state: must be running to vote when last-logged opid is not known
I20250623 14:08:37.308939 8785 leader_election.cc:304] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 559c0ec04ad84e79986e59ea400ac9dd; no voters: 0934df7e51594c3fbf25e74bf3d4404c, 1386ca6603524bb28bfa1f42a60e7e91
I20250623 14:08:37.310101 9079 raft_consensus.cc:2747] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250623 14:08:37.463714 9038 rpc_server.cc:307] RPC server started. Bound to: 127.2.78.3:43197
I20250623 14:08:37.463948 9175 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.78.3:43197 every 8 connection(s)
I20250623 14:08:37.467926 9038 server_base.cc:1180] Dumped server information to /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb
I20250623 14:08:37.471226 2360 external_mini_cluster.cc:1413] Started /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu as pid 9038
I20250623 14:08:37.509367 9064 log.cc:826] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Log is configured to *not* fsync() on all Append() calls
I20250623 14:08:37.514961 9176 heartbeater.cc:344] Connected to a master server at 127.2.78.62:34837
I20250623 14:08:37.515450 9176 heartbeater.cc:461] Registering TS with master...
I20250623 14:08:37.516624 9176 heartbeater.cc:507] Master 127.2.78.62:34837 requested a full tablet report, sending...
I20250623 14:08:37.520354 8711 ts_manager.cc:194] Registered new tserver with Master: 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197)
I20250623 14:08:37.523775 8711 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.78.3:55869
I20250623 14:08:37.525177 2360 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250623 14:08:37.951749 8927 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:08:37.952562 8927 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Bootstrap complete.
I20250623 14:08:37.954005 8927 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent bootstrapping tablet: real 3.041s user 2.913s sys 0.100s
I20250623 14:08:37.958881 8927 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:37.960728 8927 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1386ca6603524bb28bfa1f42a60e7e91, State: Initialized, Role: FOLLOWER
I20250623 14:08:37.961515 8927 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:37.964521 8927 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent starting tablet: real 0.010s user 0.006s sys 0.004s
I20250623 14:08:38.527935 9176 heartbeater.cc:499] Master 127.2.78.62:34837 was elected leader, sending a full tablet report...
I20250623 14:08:39.142629 9189 raft_consensus.cc:491] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:08:39.143019 9189 raft_consensus.cc:513] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.144508 9189 leader_election.cc:290] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197), 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:39.159713 8989 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "559c0ec04ad84e79986e59ea400ac9dd" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91" is_pre_election: true
I20250623 14:08:39.160441 8989 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 559c0ec04ad84e79986e59ea400ac9dd in term 1.
I20250623 14:08:39.161646 8785 leader_election.cc:304] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1386ca6603524bb28bfa1f42a60e7e91, 559c0ec04ad84e79986e59ea400ac9dd; no voters:
I20250623 14:08:39.162567 9189 raft_consensus.cc:2802] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250623 14:08:39.162899 9189 raft_consensus.cc:491] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:08:39.163211 9189 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:08:39.172683 9189 raft_consensus.cc:513] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.174823 9189 leader_election.cc:290] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 election: Requested vote from peers 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197), 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863)
I20250623 14:08:39.168217 9131 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "559c0ec04ad84e79986e59ea400ac9dd" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "0934df7e51594c3fbf25e74bf3d4404c" is_pre_election: true
I20250623 14:08:39.176131 9130 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "559c0ec04ad84e79986e59ea400ac9dd" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "0934df7e51594c3fbf25e74bf3d4404c"
I20250623 14:08:39.176532 8989 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "51f609bc37a94f64ad7219b32595819b" candidate_uuid: "559c0ec04ad84e79986e59ea400ac9dd" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91"
I20250623 14:08:39.177042 8989 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Advancing to term 2
W20250623 14:08:39.179842 8785 leader_election.cc:343] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197): Illegal state: must be running to vote when last-logged opid is not known
I20250623 14:08:39.185798 8989 raft_consensus.cc:2466] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 559c0ec04ad84e79986e59ea400ac9dd in term 2.
I20250623 14:08:39.186877 8785 leader_election.cc:304] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1386ca6603524bb28bfa1f42a60e7e91, 559c0ec04ad84e79986e59ea400ac9dd; no voters: 0934df7e51594c3fbf25e74bf3d4404c
I20250623 14:08:39.187628 9189 raft_consensus.cc:2802] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 FOLLOWER]: Leader election won for term 2
I20250623 14:08:39.189244 9189 raft_consensus.cc:695] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 LEADER]: Becoming Leader. State: Replica: 559c0ec04ad84e79986e59ea400ac9dd, State: Running, Role: LEADER
I20250623 14:08:39.190272 9189 consensus_queue.cc:237] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.203292 8711 catalog_manager.cc:5582] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd reported cstate change: term changed from 0 to 2, leader changed from <none> to 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1), VOTER 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3) added, VOTER 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2) added, VOTER 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1) added. New cstate: current_term: 2 leader_uuid: "559c0ec04ad84e79986e59ea400ac9dd" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } health_report { overall_health: UNKNOWN } } }
I20250623 14:08:39.334403 9064 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250623 14:08:39.335223 9064 tablet_bootstrap.cc:492] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Bootstrap complete.
I20250623 14:08:39.336525 9064 ts_tablet_manager.cc:1397] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent bootstrapping tablet: real 2.177s user 2.068s sys 0.076s
I20250623 14:08:39.341535 9064 raft_consensus.cc:357] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.343524 9064 raft_consensus.cc:738] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0934df7e51594c3fbf25e74bf3d4404c, State: Initialized, Role: FOLLOWER
I20250623 14:08:39.344226 9064 consensus_queue.cc:260] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.346939 9064 ts_tablet_manager.cc:1428] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent starting tablet: real 0.010s user 0.012s sys 0.000s
I20250623 14:08:39.670159 8989 raft_consensus.cc:1273] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Refusing update from remote peer 559c0ec04ad84e79986e59ea400ac9dd: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250623 14:08:39.671383 9189 consensus_queue.cc:1035] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Connected to new peer: Peer: permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
I20250623 14:08:39.685295 9131 raft_consensus.cc:3058] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Advancing to term 2
I20250623 14:08:39.700204 9131 raft_consensus.cc:1273] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 2 FOLLOWER]: Refusing update from remote peer 559c0ec04ad84e79986e59ea400ac9dd: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250623 14:08:39.711575 9189 consensus_queue.cc:1035] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Connected to new peer: Peer: permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
I20250623 14:08:39.723304 8853 consensus_queue.cc:237] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } }
I20250623 14:08:39.727653 8989 raft_consensus.cc:1273] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Refusing update from remote peer 559c0ec04ad84e79986e59ea400ac9dd: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250623 14:08:39.731986 9189 consensus_queue.cc:1035] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Connected to new peer: Peer: permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250623 14:08:39.739476 9192 raft_consensus.cc:2953] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } }
I20250623 14:08:39.754388 8989 raft_consensus.cc:2953] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } }
I20250623 14:08:39.776909 8695 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 51f609bc37a94f64ad7219b32595819b with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250623 14:08:39.778627 8710 catalog_manager.cc:5582] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd reported cstate change: config changed from index -1 to 207, VOTER 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3) evicted. New cstate: current_term: 2 leader_uuid: "559c0ec04ad84e79986e59ea400ac9dd" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } health_report { overall_health: HEALTHY } } }
I20250623 14:08:39.801116 8853 consensus_queue.cc:237] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:39.803740 9212 raft_consensus.cc:2953] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } } }
I20250623 14:08:39.809365 9111 tablet_service.cc:1515] Processing DeleteTablet for tablet 51f609bc37a94f64ad7219b32595819b with delete_type TABLET_DATA_TOMBSTONED (TS 0934df7e51594c3fbf25e74bf3d4404c not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:58166
I20250623 14:08:39.818912 8695 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 51f609bc37a94f64ad7219b32595819b with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250623 14:08:39.824357 8710 catalog_manager.cc:5582] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd reported cstate change: config changed from index 207 to 208, VOTER 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2) evicted. New cstate: current_term: 2 leader_uuid: "559c0ec04ad84e79986e59ea400ac9dd" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } health_report { overall_health: HEALTHY } } }
I20250623 14:08:39.825232 9217 tablet_replica.cc:331] stopping tablet replica
I20250623 14:08:39.826265 9217 raft_consensus.cc:2241] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 2 FOLLOWER]: Raft consensus shutting down.
I20250623 14:08:39.826859 9217 raft_consensus.cc:2270] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c [term 2 FOLLOWER]: Raft consensus is shut down!
I20250623 14:08:39.852023 9217 ts_tablet_manager.cc:1905] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250623 14:08:39.859681 8969 tablet_service.cc:1515] Processing DeleteTablet for tablet 51f609bc37a94f64ad7219b32595819b with delete_type TABLET_DATA_TOMBSTONED (TS 1386ca6603524bb28bfa1f42a60e7e91 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:36998
I20250623 14:08:39.866897 9217 ts_tablet_manager.cc:1918] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.206
I20250623 14:08:39.867307 9217 log.cc:1199] T 51f609bc37a94f64ad7219b32595819b P 0934df7e51594c3fbf25e74bf3d4404c: Deleting WAL directory at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/wal/wals/51f609bc37a94f64ad7219b32595819b
I20250623 14:08:39.869091 8695 catalog_manager.cc:4928] TS 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3:43197): tablet 51f609bc37a94f64ad7219b32595819b (table pre_rebuild [id=2f9f2f0e69964959b8b7e7224d7da704]) successfully deleted
I20250623 14:08:39.869880 9219 tablet_replica.cc:331] stopping tablet replica
I20250623 14:08:39.870651 9219 raft_consensus.cc:2241] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250623 14:08:39.871328 9219 raft_consensus.cc:2270] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250623 14:08:39.896238 9219 ts_tablet_manager.cc:1905] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250623 14:08:39.911058 9219 ts_tablet_manager.cc:1918] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250623 14:08:39.911463 9219 log.cc:1199] T 51f609bc37a94f64ad7219b32595819b P 1386ca6603524bb28bfa1f42a60e7e91: Deleting WAL directory at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/wal/wals/51f609bc37a94f64ad7219b32595819b
I20250623 14:08:39.913256 8695 catalog_manager.cc:4928] TS 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863): tablet 51f609bc37a94f64ad7219b32595819b (table pre_rebuild [id=2f9f2f0e69964959b8b7e7224d7da704]) successfully deleted
W20250623 14:08:39.992722 2360 scanner-internal.cc:458] Time spent opening tablet: real 2.434s user 0.005s sys 0.005s
I20250623 14:08:40.601971 8969 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:08:40.608122 8833 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250623 14:08:40.608162 9111 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+-------------------+---------
f312bdd0c4b247849ed02009fb7562cf | 127.2.78.62:34837 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:36191 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+------------------+---------+----------+----------------+-----------------
0934df7e51594c3fbf25e74bf3d4404c | 127.2.78.3:43197 | HEALTHY | <none> | 0 | 0
1386ca6603524bb28bfa1f42a60e7e91 | 127.2.78.2:39863 | HEALTHY | <none> | 0 | 0
559c0ec04ad84e79986e59ea400ac9dd | 127.2.78.1:46593 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.78.1 | experimental | 127.2.78.1:46593
local_ip_for_outbound_sockets | 127.2.78.2 | experimental | 127.2.78.2:39863
local_ip_for_outbound_sockets | 127.2.78.3 | experimental | 127.2.78.3:43197
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb | hidden | 127.2.78.1:46593
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb | hidden | 127.2.78.2:39863
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb | hidden | 127.2.78.3:43197
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:36191 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250623 14:08:40.868871 2360 log_verifier.cc:126] Checking tablet 51f609bc37a94f64ad7219b32595819b
I20250623 14:08:41.120859 2360 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 51f609bc37a94f64ad7219b32595819b
I20250623 14:08:41.123600 8710 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:40988:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250623 14:08:41.124223 8710 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:40988:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250623 14:08:41.138453 8710 catalog_manager.cc:5869] T 00000000000000000000000000000000 P f312bdd0c4b247849ed02009fb7562cf: Sending DeleteTablet for 1 replicas of tablet 51f609bc37a94f64ad7219b32595819b
I20250623 14:08:41.140237 2360 test_util.cc:276] Using random seed: -1078251532
I20250623 14:08:41.140092 8833 tablet_service.cc:1515] Processing DeleteTablet for tablet 51f609bc37a94f64ad7219b32595819b with delete_type TABLET_DATA_DELETED (Table deleted at 2025-06-23 14:08:41 UTC) from {username='slave'} at 127.0.0.1:50316
I20250623 14:08:41.141985 9250 tablet_replica.cc:331] stopping tablet replica
I20250623 14:08:41.142850 9250 raft_consensus.cc:2241] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 LEADER]: Raft consensus shutting down.
I20250623 14:08:41.143501 9250 raft_consensus.cc:2270] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd [term 2 FOLLOWER]: Raft consensus is shut down!
I20250623 14:08:41.180658 9250 ts_tablet_manager.cc:1905] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250623 14:08:41.195767 9250 ts_tablet_manager.cc:1918] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250623 14:08:41.196321 9250 log.cc:1199] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Deleting WAL directory at /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/wal/wals/51f609bc37a94f64ad7219b32595819b
I20250623 14:08:41.197470 9250 ts_tablet_manager.cc:1939] T 51f609bc37a94f64ad7219b32595819b P 559c0ec04ad84e79986e59ea400ac9dd: Deleting consensus metadata
I20250623 14:08:41.200403 8695 catalog_manager.cc:4928] TS 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593): tablet 51f609bc37a94f64ad7219b32595819b (table pre_rebuild [id=2f9f2f0e69964959b8b7e7224d7da704]) successfully deleted
I20250623 14:08:41.228824 8710 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41508:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250623 14:08:41.232280 8710 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250623 14:08:41.260179 9111 tablet_service.cc:1468] Processing CreateTablet for tablet 1ecb946225a24439bcf267dae52ff31b (DEFAULT_TABLE table=post_rebuild [id=0508ecf3c702410498a957d7fc17ea99]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:41.261489 9111 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1ecb946225a24439bcf267dae52ff31b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:41.262233 8969 tablet_service.cc:1468] Processing CreateTablet for tablet 1ecb946225a24439bcf267dae52ff31b (DEFAULT_TABLE table=post_rebuild [id=0508ecf3c702410498a957d7fc17ea99]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:41.263558 8969 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1ecb946225a24439bcf267dae52ff31b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:41.266937 8833 tablet_service.cc:1468] Processing CreateTablet for tablet 1ecb946225a24439bcf267dae52ff31b (DEFAULT_TABLE table=post_rebuild [id=0508ecf3c702410498a957d7fc17ea99]), partition=RANGE (key) PARTITION UNBOUNDED
I20250623 14:08:41.268221 8833 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1ecb946225a24439bcf267dae52ff31b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250623 14:08:41.284613 9257 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: Bootstrap starting.
I20250623 14:08:41.291878 9257 tablet_bootstrap.cc:654] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:41.298396 9259 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: Bootstrap starting.
I20250623 14:08:41.301726 9258 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: Bootstrap starting.
I20250623 14:08:41.303102 9257 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: No bootstrap required, opened a new log
I20250623 14:08:41.303617 9257 ts_tablet_manager.cc:1397] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent bootstrapping tablet: real 0.019s user 0.013s sys 0.004s
I20250623 14:08:41.306993 9257 raft_consensus.cc:357] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.307768 9257 raft_consensus.cc:383] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:41.307988 9258 tablet_bootstrap.cc:654] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:41.308133 9257 raft_consensus.cc:738] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1386ca6603524bb28bfa1f42a60e7e91, State: Initialized, Role: FOLLOWER
I20250623 14:08:41.308977 9257 consensus_queue.cc:260] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.311101 9259 tablet_bootstrap.cc:654] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: Neither blocks nor log segments found. Creating new log.
I20250623 14:08:41.317500 9258 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: No bootstrap required, opened a new log
I20250623 14:08:41.318123 9258 ts_tablet_manager.cc:1397] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent bootstrapping tablet: real 0.017s user 0.003s sys 0.012s
I20250623 14:08:41.321564 9258 raft_consensus.cc:357] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.324301 9257 ts_tablet_manager.cc:1428] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: Time spent starting tablet: real 0.020s user 0.004s sys 0.014s
I20250623 14:08:41.325047 9258 raft_consensus.cc:383] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:41.325361 9258 raft_consensus.cc:738] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 559c0ec04ad84e79986e59ea400ac9dd, State: Initialized, Role: FOLLOWER
I20250623 14:08:41.326010 9258 consensus_queue.cc:260] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.329619 9259 tablet_bootstrap.cc:492] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: No bootstrap required, opened a new log
I20250623 14:08:41.330072 9259 ts_tablet_manager.cc:1397] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent bootstrapping tablet: real 0.032s user 0.010s sys 0.011s
I20250623 14:08:41.337306 9258 ts_tablet_manager.cc:1428] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: Time spent starting tablet: real 0.019s user 0.014s sys 0.000s
I20250623 14:08:41.333882 9259 raft_consensus.cc:357] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.338137 9259 raft_consensus.cc:383] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250623 14:08:41.338419 9259 raft_consensus.cc:738] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0934df7e51594c3fbf25e74bf3d4404c, State: Initialized, Role: FOLLOWER
I20250623 14:08:41.339054 9259 consensus_queue.cc:260] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.342257 9259 ts_tablet_manager.cc:1428] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: Time spent starting tablet: real 0.012s user 0.007s sys 0.001s
I20250623 14:08:41.366542 9265 raft_consensus.cc:491] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250623 14:08:41.366997 9265 raft_consensus.cc:513] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.369145 9265 leader_election.cc:290] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863), 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593)
W20250623 14:08:41.382817 9035 tablet.cc:2378] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:08:41.395345 8853 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1ecb946225a24439bcf267dae52ff31b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "559c0ec04ad84e79986e59ea400ac9dd" is_pre_election: true
I20250623 14:08:41.396051 8853 raft_consensus.cc:2466] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 0.
I20250623 14:08:41.397330 9058 leader_election.cc:304] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0934df7e51594c3fbf25e74bf3d4404c, 559c0ec04ad84e79986e59ea400ac9dd; no voters:
I20250623 14:08:41.398188 9265 raft_consensus.cc:2802] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250623 14:08:41.398167 8989 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1ecb946225a24439bcf267dae52ff31b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91" is_pre_election: true
I20250623 14:08:41.398516 9265 raft_consensus.cc:491] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250623 14:08:41.398829 9265 raft_consensus.cc:3058] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:41.398756 8989 raft_consensus.cc:2466] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 0.
I20250623 14:08:41.403972 9265 raft_consensus.cc:513] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.405325 9265 leader_election.cc:290] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 election: Requested vote from peers 1386ca6603524bb28bfa1f42a60e7e91 (127.2.78.2:39863), 559c0ec04ad84e79986e59ea400ac9dd (127.2.78.1:46593)
I20250623 14:08:41.406430 8989 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1ecb946225a24439bcf267dae52ff31b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1386ca6603524bb28bfa1f42a60e7e91"
I20250623 14:08:41.406471 8853 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "1ecb946225a24439bcf267dae52ff31b" candidate_uuid: "0934df7e51594c3fbf25e74bf3d4404c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "559c0ec04ad84e79986e59ea400ac9dd"
I20250623 14:08:41.406826 8989 raft_consensus.cc:3058] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:41.406975 8853 raft_consensus.cc:3058] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 0 FOLLOWER]: Advancing to term 1
I20250623 14:08:41.411474 8989 raft_consensus.cc:2466] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 1.
I20250623 14:08:41.412263 9058 leader_election.cc:304] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0934df7e51594c3fbf25e74bf3d4404c, 1386ca6603524bb28bfa1f42a60e7e91; no voters:
I20250623 14:08:41.412810 9265 raft_consensus.cc:2802] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 FOLLOWER]: Leader election won for term 1
I20250623 14:08:41.413826 8853 raft_consensus.cc:2466] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0934df7e51594c3fbf25e74bf3d4404c in term 1.
I20250623 14:08:41.414862 9265 raft_consensus.cc:695] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [term 1 LEADER]: Becoming Leader. State: Replica: 0934df7e51594c3fbf25e74bf3d4404c, State: Running, Role: LEADER
I20250623 14:08:41.415747 9265 consensus_queue.cc:237] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } }
I20250623 14:08:41.423607 8710 catalog_manager.cc:5582] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c reported cstate change: term changed from 0 to 1, leader changed from <none> to 0934df7e51594c3fbf25e74bf3d4404c (127.2.78.3). New cstate: current_term: 1 leader_uuid: "0934df7e51594c3fbf25e74bf3d4404c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0934df7e51594c3fbf25e74bf3d4404c" member_type: VOTER last_known_addr { host: "127.2.78.3" port: 43197 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 } health_report { overall_health: UNKNOWN } } }
W20250623 14:08:41.429050 8900 tablet.cc:2378] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250623 14:08:41.489161 9179 tablet.cc:2378] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250623 14:08:41.604560 8989 raft_consensus.cc:1273] T 1ecb946225a24439bcf267dae52ff31b P 1386ca6603524bb28bfa1f42a60e7e91 [term 1 FOLLOWER]: Refusing update from remote peer 0934df7e51594c3fbf25e74bf3d4404c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250623 14:08:41.606228 9270 consensus_queue.cc:1035] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Connected to new peer: Peer: permanent_uuid: "1386ca6603524bb28bfa1f42a60e7e91" member_type: VOTER last_known_addr { host: "127.2.78.2" port: 39863 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250623 14:08:41.606292 8853 raft_consensus.cc:1273] T 1ecb946225a24439bcf267dae52ff31b P 559c0ec04ad84e79986e59ea400ac9dd [term 1 FOLLOWER]: Refusing update from remote peer 0934df7e51594c3fbf25e74bf3d4404c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250623 14:08:41.608417 9265 consensus_queue.cc:1035] T 1ecb946225a24439bcf267dae52ff31b P 0934df7e51594c3fbf25e74bf3d4404c [LEADER]: Connected to new peer: Peer: permanent_uuid: "559c0ec04ad84e79986e59ea400ac9dd" member_type: VOTER last_known_addr { host: "127.2.78.1" port: 46593 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250623 14:08:41.644218 9280 mvcc.cc:204] Tried to move back new op lower bound from 7170816907676237824 to 7170816906925584384. Current Snapshot: MvccSnapshot[applied={T|T < 7170816907676237824}]
I20250623 14:08:41.644985 9279 mvcc.cc:204] Tried to move back new op lower bound from 7170816907676237824 to 7170816906925584384. Current Snapshot: MvccSnapshot[applied={T|T < 7170816907676237824}]
I20250623 14:08:41.697584 9281 mvcc.cc:204] Tried to move back new op lower bound from 7170816907676237824 to 7170816906925584384. Current Snapshot: MvccSnapshot[applied={T|T < 7170816907676237824}]
I20250623 14:08:46.560514 9111 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250623 14:08:46.563508 8969 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250623 14:08:46.565929 8833 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+-------------------+---------
f312bdd0c4b247849ed02009fb7562cf | 127.2.78.62:34837 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:36191 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+------------------+---------+----------+----------------+-----------------
0934df7e51594c3fbf25e74bf3d4404c | 127.2.78.3:43197 | HEALTHY | <none> | 1 | 0
1386ca6603524bb28bfa1f42a60e7e91 | 127.2.78.2:39863 | HEALTHY | <none> | 0 | 0
559c0ec04ad84e79986e59ea400ac9dd | 127.2.78.1:46593 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.78.1 | experimental | 127.2.78.1:46593
local_ip_for_outbound_sockets | 127.2.78.2 | experimental | 127.2.78.2:39863
local_ip_for_outbound_sockets | 127.2.78.3 | experimental | 127.2.78.3:43197
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-0/data/info.pb | hidden | 127.2.78.1:46593
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-1/data/info.pb | hidden | 127.2.78.2:39863
server_dump_info_path | /tmp/dist-test-task0z5RNj/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1750687545053178-2360-0/minicluster-data/ts-2/data/info.pb | hidden | 127.2.78.3:43197
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.78.20:36191 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250623 14:08:46.774778 2360 log_verifier.cc:126] Checking tablet 1ecb946225a24439bcf267dae52ff31b
I20250623 14:08:47.564097 2360 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 1ecb946225a24439bcf267dae52ff31b
I20250623 14:08:47.564973 2360 log_verifier.cc:126] Checking tablet 51f609bc37a94f64ad7219b32595819b
I20250623 14:08:47.565284 2360 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 51f609bc37a94f64ad7219b32595819b
I20250623 14:08:47.590888 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8748
I20250623 14:08:47.635471 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8903
I20250623 14:08:47.678288 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 9038
I20250623 14:08:47.721643 2360 external_mini_cluster.cc:1620] Killing /tmp/dist-test-task0z5RNj/build/tsan/bin/kudu with pid 8678
2025-06-23T14:08:47Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (35409 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (35409 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (182661 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST