Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250812 01:52:03.226197 2345 test_util.cc:276] Using random seed: 1231374809
W20250812 01:52:04.463956 2345 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.194s user 0.438s sys 0.753s
W20250812 01:52:04.464355 2345 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.195s user 0.438s sys 0.753s
I20250812 01:52:04.473533 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:52:04.473731 2345 ts_itest-base.cc:116] --------------
I20250812 01:52:04.473917 2345 ts_itest-base.cc:117] 4 tablet servers
I20250812 01:52:04.474092 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:52:04.474264 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:52:04Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:52:04Z Disabled control of system clock
I20250812 01:52:04.510450 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:46593
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:46593 with env {}
W20250812 01:52:04.808070 2359 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:04.808702 2359 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:04.809156 2359 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:04.840224 2359 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:52:04.840543 2359 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:04.840821 2359 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:52:04.841079 2359 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:52:04.875603 2359 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:46593
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:46593
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:04.877005 2359 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:04.878568 2359 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:04.890121 2365 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:04.890241 2366 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:06.292904 2364 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 2359
W20250812 01:52:06.687827 2359 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.798s user 0.672s sys 1.114s
W20250812 01:52:06.688756 2367 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1798 milliseconds
W20250812 01:52:06.689014 2359 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.800s user 0.673s sys 1.114s
I20250812 01:52:06.689785 2359 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:52:06.689872 2368 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:06.692953 2359 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:06.695319 2359 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:06.696687 2359 hybrid_clock.cc:648] HybridClock initialized: now 1754963526696653 us; error 49 us; skew 500 ppm
I20250812 01:52:06.697535 2359 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:06.703429 2359 webserver.cc:489] Webserver started at http://127.2.74.126:41893/ using document root <none> and password file <none>
I20250812 01:52:06.704353 2359 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:06.704571 2359 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:06.705056 2359 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:06.709408 2359 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "a5492d66817849be9f419cf0c573c0f8"
format_stamp: "Formatted at 2025-08-12 01:52:06 on dist-test-slave-3nxt"
I20250812 01:52:06.710476 2359 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "a5492d66817849be9f419cf0c573c0f8"
format_stamp: "Formatted at 2025-08-12 01:52:06 on dist-test-slave-3nxt"
I20250812 01:52:06.717586 2359 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.006s
I20250812 01:52:06.722977 2375 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:06.724002 2359 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.001s
I20250812 01:52:06.724313 2359 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a5492d66817849be9f419cf0c573c0f8"
format_stamp: "Formatted at 2025-08-12 01:52:06 on dist-test-slave-3nxt"
I20250812 01:52:06.724678 2359 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:06.779227 2359 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:06.780694 2359 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:06.781133 2359 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:06.850972 2359 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:46593
I20250812 01:52:06.851042 2426 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:46593 every 8 connection(s)
I20250812 01:52:06.853689 2359 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:52:06.858728 2427 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:06.859833 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 2359
I20250812 01:52:06.860240 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:52:06.877905 2427 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Bootstrap starting.
I20250812 01:52:06.883226 2427 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:06.884928 2427 log.cc:826] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:06.889878 2427 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: No bootstrap required, opened a new log
I20250812 01:52:06.908809 2427 raft_consensus.cc:357] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:06.909492 2427 raft_consensus.cc:383] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:06.909713 2427 raft_consensus.cc:738] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a5492d66817849be9f419cf0c573c0f8, State: Initialized, Role: FOLLOWER
I20250812 01:52:06.910358 2427 consensus_queue.cc:260] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:06.910843 2427 raft_consensus.cc:397] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:52:06.911098 2427 raft_consensus.cc:491] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:52:06.911404 2427 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:06.915381 2427 raft_consensus.cc:513] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:06.916077 2427 leader_election.cc:304] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a5492d66817849be9f419cf0c573c0f8; no voters:
I20250812 01:52:06.917704 2427 leader_election.cc:290] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:52:06.918318 2432 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:06.920550 2432 raft_consensus.cc:695] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 LEADER]: Becoming Leader. State: Replica: a5492d66817849be9f419cf0c573c0f8, State: Running, Role: LEADER
I20250812 01:52:06.922227 2427 sys_catalog.cc:564] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:52:06.922197 2432 consensus_queue.cc:237] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:06.933833 2434 sys_catalog.cc:455] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a5492d66817849be9f419cf0c573c0f8. Latest consensus state: current_term: 1 leader_uuid: "a5492d66817849be9f419cf0c573c0f8" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } } }
I20250812 01:52:06.934964 2434 sys_catalog.cc:458] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:06.938236 2433 sys_catalog.cc:455] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a5492d66817849be9f419cf0c573c0f8" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } } }
I20250812 01:52:06.938741 2433 sys_catalog.cc:458] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:06.939949 2442 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:52:06.950726 2442 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:52:06.968461 2442 catalog_manager.cc:1349] Generated new cluster ID: 72c2f5539c4a4fcba6a134a9301288ce
I20250812 01:52:06.968888 2442 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:52:06.998819 2442 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:52:07.000947 2442 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:52:07.022521 2442 catalog_manager.cc:5955] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Generated new TSK 0
I20250812 01:52:07.023444 2442 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:52:07.050027 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250812 01:52:07.353816 2451 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:07.354370 2451 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:07.354873 2451 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:07.387336 2451 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:07.388536 2451 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:52:07.424160 2451 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:07.425539 2451 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:07.427107 2451 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:07.439899 2457 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:08.843466 2456 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2451
W20250812 01:52:07.441293 2458 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:09.203006 2451 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.762s user 0.571s sys 1.145s
W20250812 01:52:09.204668 2451 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.763s user 0.571s sys 1.145s
W20250812 01:52:09.204787 2459 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1763 milliseconds
I20250812 01:52:09.205391 2451 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:52:09.205425 2460 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:09.209951 2451 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:09.212159 2451 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:09.213546 2451 hybrid_clock.cc:648] HybridClock initialized: now 1754963529213494 us; error 63 us; skew 500 ppm
I20250812 01:52:09.214370 2451 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:09.220525 2451 webserver.cc:489] Webserver started at http://127.2.74.65:39699/ using document root <none> and password file <none>
I20250812 01:52:09.221541 2451 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:09.221758 2451 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:09.222203 2451 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:09.226625 2451 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "d5484e5a1adb4030a8725eee2f58e864"
format_stamp: "Formatted at 2025-08-12 01:52:09 on dist-test-slave-3nxt"
I20250812 01:52:09.227710 2451 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "d5484e5a1adb4030a8725eee2f58e864"
format_stamp: "Formatted at 2025-08-12 01:52:09 on dist-test-slave-3nxt"
I20250812 01:52:09.235224 2451 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250812 01:52:09.241140 2467 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:09.242273 2451 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250812 01:52:09.242578 2451 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d5484e5a1adb4030a8725eee2f58e864"
format_stamp: "Formatted at 2025-08-12 01:52:09 on dist-test-slave-3nxt"
I20250812 01:52:09.242900 2451 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:09.299603 2451 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:09.301048 2451 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:09.301524 2451 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:09.304194 2451 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:09.308509 2451 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:09.308755 2451 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.001s
I20250812 01:52:09.309028 2451 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:09.309181 2451 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:09.471376 2451 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:41715
I20250812 01:52:09.471544 2579 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:41715 every 8 connection(s)
I20250812 01:52:09.474452 2451 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:52:09.477043 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 2451
I20250812 01:52:09.477586 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:52:09.489512 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:52:09.517354 2580 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:09.517868 2580 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:09.519138 2580 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:09.522164 2392 ts_manager.cc:194] Registered new tserver with Master: d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715)
I20250812 01:52:09.525035 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:50761
W20250812 01:52:09.794018 2584 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:09.794477 2584 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:09.795207 2584 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:09.827862 2584 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:09.828680 2584 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:52:09.862730 2584 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:09.864041 2584 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:09.865564 2584 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:09.877744 2590 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:10.529013 2580 heartbeater.cc:499] Master 127.2.74.126:46593 was elected leader, sending a full tablet report...
W20250812 01:52:09.878795 2591 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:11.639624 2584 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.762s user 0.630s sys 1.131s
W20250812 01:52:11.640017 2584 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.762s user 0.630s sys 1.132s
W20250812 01:52:11.281134 2589 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2584
W20250812 01:52:11.642261 2593 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:11.645530 2592 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1767 milliseconds
I20250812 01:52:11.645570 2584 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:11.646739 2584 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:11.648815 2584 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:11.650151 2584 hybrid_clock.cc:648] HybridClock initialized: now 1754963531650122 us; error 47 us; skew 500 ppm
I20250812 01:52:11.650892 2584 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:11.656898 2584 webserver.cc:489] Webserver started at http://127.2.74.66:39759/ using document root <none> and password file <none>
I20250812 01:52:11.657872 2584 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:11.658126 2584 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:11.658567 2584 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:11.663029 2584 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "a9a78e29c1934ce6a95ec3162de22063"
format_stamp: "Formatted at 2025-08-12 01:52:11 on dist-test-slave-3nxt"
I20250812 01:52:11.664155 2584 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "a9a78e29c1934ce6a95ec3162de22063"
format_stamp: "Formatted at 2025-08-12 01:52:11 on dist-test-slave-3nxt"
I20250812 01:52:11.671118 2584 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250812 01:52:11.676748 2600 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:11.677784 2584 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250812 01:52:11.678093 2584 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "a9a78e29c1934ce6a95ec3162de22063"
format_stamp: "Formatted at 2025-08-12 01:52:11 on dist-test-slave-3nxt"
I20250812 01:52:11.678426 2584 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:11.732206 2584 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:11.733714 2584 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:11.734148 2584 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:11.736704 2584 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:11.740988 2584 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:11.741214 2584 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:11.741484 2584 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:11.741645 2584 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:11.877950 2584 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:32893
I20250812 01:52:11.878021 2712 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:32893 every 8 connection(s)
I20250812 01:52:11.881502 2584 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:52:11.889600 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 2584
I20250812 01:52:11.889968 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:52:11.896068 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:52:11.903352 2713 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:11.903759 2713 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:11.904799 2713 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:11.906778 2392 ts_manager.cc:194] Registered new tserver with Master: a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893)
I20250812 01:52:11.908034 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:41173
W20250812 01:52:12.193122 2717 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:12.193605 2717 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:12.194082 2717 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:12.225864 2717 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:12.226727 2717 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:52:12.262408 2717 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:12.263741 2717 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:12.265381 2717 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:12.278421 2723 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:12.910980 2713 heartbeater.cc:499] Master 127.2.74.126:46593 was elected leader, sending a full tablet report...
W20250812 01:52:12.278895 2724 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:13.649645 2726 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:13.651886 2725 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1369 milliseconds
W20250812 01:52:13.652873 2717 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.375s user 0.478s sys 0.889s
W20250812 01:52:13.653246 2717 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.375s user 0.478s sys 0.889s
I20250812 01:52:13.653546 2717 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:13.655043 2717 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:13.658244 2717 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:13.659727 2717 hybrid_clock.cc:648] HybridClock initialized: now 1754963533659676 us; error 70 us; skew 500 ppm
I20250812 01:52:13.660605 2717 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:13.667922 2717 webserver.cc:489] Webserver started at http://127.2.74.67:36711/ using document root <none> and password file <none>
I20250812 01:52:13.668993 2717 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:13.669214 2717 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:13.669643 2717 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:13.674206 2717 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "c5eafe9f3f174e129e56831515823c69"
format_stamp: "Formatted at 2025-08-12 01:52:13 on dist-test-slave-3nxt"
I20250812 01:52:13.675292 2717 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "c5eafe9f3f174e129e56831515823c69"
format_stamp: "Formatted at 2025-08-12 01:52:13 on dist-test-slave-3nxt"
I20250812 01:52:13.683298 2717 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.000s sys 0.006s
I20250812 01:52:13.689388 2734 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:13.690544 2717 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250812 01:52:13.690868 2717 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "c5eafe9f3f174e129e56831515823c69"
format_stamp: "Formatted at 2025-08-12 01:52:13 on dist-test-slave-3nxt"
I20250812 01:52:13.691203 2717 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:13.766441 2717 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:13.767874 2717 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:13.768299 2717 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:13.770815 2717 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:13.774847 2717 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:13.775063 2717 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:13.775286 2717 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:13.775450 2717 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:13.915001 2717 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:41243
I20250812 01:52:13.915108 2846 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:41243 every 8 connection(s)
I20250812 01:52:13.917678 2717 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:52:13.928390 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 2717
I20250812 01:52:13.928972 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:52:13.935536 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.68:0
--local_ip_for_outbound_sockets=127.2.74.68
--webserver_interface=127.2.74.68
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:52:13.938122 2847 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:13.938719 2847 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:13.940124 2847 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:13.942567 2392 ts_manager.cc:194] Registered new tserver with Master: c5eafe9f3f174e129e56831515823c69 (127.2.74.67:41243)
I20250812 01:52:13.943787 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:59207
W20250812 01:52:14.255044 2851 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:14.255553 2851 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:14.256047 2851 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:14.287864 2851 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:14.288749 2851 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.68
I20250812 01:52:14.323086 2851 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.68:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.2.74.68
--webserver_port=0
--tserver_master_addrs=127.2.74.126:46593
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.68
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:14.324419 2851 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:14.325940 2851 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:14.337482 2857 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:14.946772 2847 heartbeater.cc:499] Master 127.2.74.126:46593 was elected leader, sending a full tablet report...
W20250812 01:52:14.338644 2858 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:15.554890 2860 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:15.557855 2859 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1214 milliseconds
I20250812 01:52:15.558048 2851 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:15.559226 2851 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:15.561359 2851 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:15.562736 2851 hybrid_clock.cc:648] HybridClock initialized: now 1754963535562689 us; error 42 us; skew 500 ppm
I20250812 01:52:15.563544 2851 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:15.569577 2851 webserver.cc:489] Webserver started at http://127.2.74.68:34029/ using document root <none> and password file <none>
I20250812 01:52:15.570462 2851 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:15.570667 2851 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:15.571101 2851 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:15.575424 2851 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "1d9051b6101f47f2b41230942a2a80d4"
format_stamp: "Formatted at 2025-08-12 01:52:15 on dist-test-slave-3nxt"
I20250812 01:52:15.576980 2851 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "1d9051b6101f47f2b41230942a2a80d4"
format_stamp: "Formatted at 2025-08-12 01:52:15 on dist-test-slave-3nxt"
I20250812 01:52:15.584333 2851 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.002s sys 0.004s
I20250812 01:52:15.589855 2868 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:15.590850 2851 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250812 01:52:15.591153 2851 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "1d9051b6101f47f2b41230942a2a80d4"
format_stamp: "Formatted at 2025-08-12 01:52:15 on dist-test-slave-3nxt"
I20250812 01:52:15.591471 2851 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:15.636085 2851 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:15.637521 2851 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:15.637929 2851 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:15.640515 2851 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:15.644461 2851 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:15.644675 2851 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250812 01:52:15.644915 2851 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:15.645067 2851 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:15.778020 2851 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.68:42515
I20250812 01:52:15.778126 2980 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.68:42515 every 8 connection(s)
I20250812 01:52:15.780489 2851 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250812 01:52:15.791193 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 2851
I20250812 01:52:15.791685 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250812 01:52:15.801676 2981 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:15.802057 2981 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:15.802973 2981 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:15.805009 2392 ts_manager.cc:194] Registered new tserver with Master: 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:15.806257 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.68:40853
I20250812 01:52:15.811614 2345 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250812 01:52:15.849865 2392 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59546:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250812 01:52:15.943455 2648 tablet_service.cc:1468] Processing CreateTablet for tablet 77712b55ba834c10ba2ce1296f532188 (DEFAULT_TABLE table=TestTable [id=0379d00b589b411f99befb352cadfd10]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:15.945407 2648 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 77712b55ba834c10ba2ce1296f532188. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:15.947031 2515 tablet_service.cc:1468] Processing CreateTablet for tablet 77712b55ba834c10ba2ce1296f532188 (DEFAULT_TABLE table=TestTable [id=0379d00b589b411f99befb352cadfd10]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:15.948375 2515 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 77712b55ba834c10ba2ce1296f532188. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:15.952731 2916 tablet_service.cc:1468] Processing CreateTablet for tablet 77712b55ba834c10ba2ce1296f532188 (DEFAULT_TABLE table=TestTable [id=0379d00b589b411f99befb352cadfd10]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:15.954631 2916 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 77712b55ba834c10ba2ce1296f532188. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:15.972625 3000 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Bootstrap starting.
I20250812 01:52:15.975656 3001 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Bootstrap starting.
I20250812 01:52:15.977299 3002 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Bootstrap starting.
I20250812 01:52:15.982936 3002 tablet_bootstrap.cc:654] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:15.983569 3000 tablet_bootstrap.cc:654] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:15.983789 3001 tablet_bootstrap.cc:654] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:15.984851 3002 log.cc:826] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:15.986049 3000 log.cc:826] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:15.986049 3001 log.cc:826] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:15.990296 3002 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: No bootstrap required, opened a new log
I20250812 01:52:15.990823 3002 ts_tablet_manager.cc:1397] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Time spent bootstrapping tablet: real 0.014s user 0.011s sys 0.000s
I20250812 01:52:15.992141 3000 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: No bootstrap required, opened a new log
I20250812 01:52:15.992358 3001 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: No bootstrap required, opened a new log
I20250812 01:52:15.992693 3000 ts_tablet_manager.cc:1397] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Time spent bootstrapping tablet: real 0.021s user 0.017s sys 0.000s
I20250812 01:52:15.992812 3001 ts_tablet_manager.cc:1397] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Time spent bootstrapping tablet: real 0.018s user 0.012s sys 0.004s
I20250812 01:52:16.010890 3001 raft_consensus.cc:357] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.011682 3001 raft_consensus.cc:383] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:16.012034 3001 raft_consensus.cc:738] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1d9051b6101f47f2b41230942a2a80d4, State: Initialized, Role: FOLLOWER
I20250812 01:52:16.012806 3001 consensus_queue.cc:260] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.018767 3002 raft_consensus.cc:357] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.019831 3002 raft_consensus.cc:383] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:16.020181 3002 raft_consensus.cc:738] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d5484e5a1adb4030a8725eee2f58e864, State: Initialized, Role: FOLLOWER
I20250812 01:52:16.019698 3000 raft_consensus.cc:357] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.020540 2981 heartbeater.cc:499] Master 127.2.74.126:46593 was elected leader, sending a full tablet report...
I20250812 01:52:16.020644 3000 raft_consensus.cc:383] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:16.021042 3000 raft_consensus.cc:738] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a9a78e29c1934ce6a95ec3162de22063, State: Initialized, Role: FOLLOWER
I20250812 01:52:16.021236 3002 consensus_queue.cc:260] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.022138 3001 ts_tablet_manager.cc:1428] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Time spent starting tablet: real 0.029s user 0.022s sys 0.006s
I20250812 01:52:16.021914 3000 consensus_queue.cc:260] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.026363 3002 ts_tablet_manager.cc:1428] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Time spent starting tablet: real 0.035s user 0.031s sys 0.004s
I20250812 01:52:16.029220 3000 ts_tablet_manager.cc:1428] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Time spent starting tablet: real 0.036s user 0.022s sys 0.013s
W20250812 01:52:16.034927 2982 tablet.cc:2378] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250812 01:52:16.141023 2714 tablet.cc:2378] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:16.204003 3007 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:52:16.204494 3007 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.206359 3008 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:52:16.206848 3007 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:16.206827 3008 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.212417 3008 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:16.224753 2668 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "d5484e5a1adb4030a8725eee2f58e864" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a9a78e29c1934ce6a95ec3162de22063" is_pre_election: true
I20250812 01:52:16.225729 2668 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d5484e5a1adb4030a8725eee2f58e864 in term 0.
I20250812 01:52:16.227250 2468 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a9a78e29c1934ce6a95ec3162de22063, d5484e5a1adb4030a8725eee2f58e864; no voters:
I20250812 01:52:16.227613 2936 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "d5484e5a1adb4030a8725eee2f58e864" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4" is_pre_election: true
I20250812 01:52:16.228224 3007 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:52:16.228461 2936 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d5484e5a1adb4030a8725eee2f58e864 in term 0.
I20250812 01:52:16.228292 2535 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d5484e5a1adb4030a8725eee2f58e864" is_pre_election: true
I20250812 01:52:16.228653 3007 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:52:16.229002 3007 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:16.233304 2936 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4" is_pre_election: true
I20250812 01:52:16.233718 2936 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a9a78e29c1934ce6a95ec3162de22063 in term 0.
I20250812 01:52:16.234056 3007 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.234566 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, a9a78e29c1934ce6a95ec3162de22063; no voters:
I20250812 01:52:16.235122 2535 raft_consensus.cc:2391] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate a9a78e29c1934ce6a95ec3162de22063 in current term 1: Already voted for candidate d5484e5a1adb4030a8725eee2f58e864 in this term.
I20250812 01:52:16.235242 3008 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:52:16.235550 3008 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:52:16.235867 3008 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:16.235890 3007 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [CANDIDATE]: Term 1 election: Requested vote from peers a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:16.236743 2668 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "d5484e5a1adb4030a8725eee2f58e864" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a9a78e29c1934ce6a95ec3162de22063"
I20250812 01:52:16.237792 2936 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "d5484e5a1adb4030a8725eee2f58e864" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4"
I20250812 01:52:16.238324 2936 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:16.241086 3008 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.242004 2668 raft_consensus.cc:2391] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate d5484e5a1adb4030a8725eee2f58e864 in current term 1: Already voted for candidate a9a78e29c1934ce6a95ec3162de22063 in this term.
I20250812 01:52:16.242846 3008 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 1 election: Requested vote from peers d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:16.243543 2535 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d5484e5a1adb4030a8725eee2f58e864"
I20250812 01:52:16.244263 2535 raft_consensus.cc:2391] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate a9a78e29c1934ce6a95ec3162de22063 in current term 1: Already voted for candidate d5484e5a1adb4030a8725eee2f58e864 in this term.
I20250812 01:52:16.244457 2936 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d5484e5a1adb4030a8725eee2f58e864 in term 1.
W20250812 01:52:16.244854 2581 tablet.cc:2378] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:16.245571 2470 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, d5484e5a1adb4030a8725eee2f58e864; no voters:
I20250812 01:52:16.245913 2935 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4"
I20250812 01:52:16.246353 3007 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:16.246729 2935 raft_consensus.cc:2391] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate a9a78e29c1934ce6a95ec3162de22063 in current term 1: Already voted for candidate d5484e5a1adb4030a8725eee2f58e864 in this term.
I20250812 01:52:16.247807 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 1 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: a9a78e29c1934ce6a95ec3162de22063; no voters: 1d9051b6101f47f2b41230942a2a80d4, d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:16.248472 3007 raft_consensus.cc:695] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [term 1 LEADER]: Becoming Leader. State: Replica: d5484e5a1adb4030a8725eee2f58e864, State: Running, Role: LEADER
I20250812 01:52:16.248749 3008 raft_consensus.cc:2747] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Leader election lost for term 1. Reason: could not achieve majority
I20250812 01:52:16.249369 3007 consensus_queue.cc:237] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:16.257727 2391 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 reported cstate change: term changed from 0 to 1, leader changed from <none> to d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65). New cstate: current_term: 1 leader_uuid: "d5484e5a1adb4030a8725eee2f58e864" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } health_report { overall_health: UNKNOWN } } }
I20250812 01:52:16.300134 2345 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250812 01:52:16.303697 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d5484e5a1adb4030a8725eee2f58e864 to finish bootstrapping
I20250812 01:52:16.316642 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver a9a78e29c1934ce6a95ec3162de22063 to finish bootstrapping
I20250812 01:52:16.326686 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1d9051b6101f47f2b41230942a2a80d4 to finish bootstrapping
I20250812 01:52:16.337369 2345 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250812 01:52:16.340442 2345 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
  tablet_id: "77712b55ba834c10ba2ce1296f532188"
  DEPRECATED_stale: false
  partition {
    partition_key_start: ""
    partition_key_end: ""
  }
  interned_replicas {
    ts_info_idx: 0
    role: FOLLOWER
  }
  interned_replicas {
    ts_info_idx: 1
    role: LEADER
  }
  interned_replicas {
    ts_info_idx: 2
    role: FOLLOWER
  }
}
ts_infos {
  permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063"
  rpc_addresses {
    host: "127.2.74.66"
    port: 32893
  }
}
ts_infos {
  permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864"
  rpc_addresses {
    host: "127.2.74.65"
    port: 41715
  }
}
ts_infos {
  permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4"
  rpc_addresses {
    host: "127.2.74.68"
    port: 42515
  }
}
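The same placement can be cross-checked against the live cluster with the standard health check; a minimal sketch, assuming ksck's tablet filter flag and using the master address shown earlier:

  kudu cluster ksck 127.2.74.126:46593 -tablets=77712b55ba834c10ba2ce1296f532188

ksck prints each replica of the tablet together with its Raft role, which at this point should match the FOLLOWER/LEADER/FOLLOWER assignment in the locations dump above (leader on d5484e5a1adb4030a8725eee2f58e864 at 127.2.74.65:41715).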
I20250812 01:52:16.668308 3007 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:52:16.684355 3022 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P d5484e5a1adb4030a8725eee2f58e864 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:52:16.686708 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 2451
W20250812 01:52:16.714558 2604 connection.cc:537] server connection from 127.2.74.65:35275 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250812 01:52:16.714969 2376 connection.cc:537] server connection from 127.2.74.65:50761 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250812 01:52:16.715746 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 2359
I20250812 01:52:16.742906 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:46593
--webserver_interface=127.2.74.126
--webserver_port=41893
--builtin_ntp_servers=127.2.74.84:38071
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:46593 with env {}
W20250812 01:52:16.958974 2847 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:46593 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:46593: connect: Connection refused (error 111)
W20250812 01:52:17.058789 3025 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:17.059363 3025 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:17.059834 3025 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:17.090850 3025 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:52:17.091167 3025 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:17.091431 3025 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:52:17.091665 3025 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:52:17.126749 3025 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:38071
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:46593
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:46593
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=41893
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:17.128057 3025 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:17.129614 3025 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:17.139181 3032 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:17.720715 2713 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:46593 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:46593: connect: Connection refused (error 111)
W20250812 01:52:17.728539 2981 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:46593 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:46593: connect: Connection refused (error 111)
I20250812 01:52:18.190351 3042 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader d5484e5a1adb4030a8725eee2f58e864)
I20250812 01:52:18.192413 3042 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
W20250812 01:52:18.198084 2601 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
I20250812 01:52:18.198606 3042 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:18.200757 2935 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4" is_pre_election: true
I20250812 01:52:18.201349 2935 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a9a78e29c1934ce6a95ec3162de22063 in term 1.
I20250812 01:52:18.202539 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, a9a78e29c1934ce6a95ec3162de22063; no voters:
I20250812 01:52:18.203231 3042 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250812 01:52:18.203742 3042 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Starting leader election (detected failure of leader d5484e5a1adb4030a8725eee2f58e864)
I20250812 01:52:18.204229 3042 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 1 FOLLOWER]: Advancing to term 2
W20250812 01:52:18.216567 2601 leader_election.cc:336] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715): Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
I20250812 01:52:18.218288 3041 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader d5484e5a1adb4030a8725eee2f58e864)
I20250812 01:52:18.218861 3041 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:18.219596 3042 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:18.224088 2935 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4"
I20250812 01:52:18.224956 2935 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:52:18.225155 3042 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 election: Requested vote from peers d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715), 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
W20250812 01:52:18.235273 2601 leader_election.cc:336] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715): Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
I20250812 01:52:18.237782 2935 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a9a78e29c1934ce6a95ec3162de22063 in term 2.
I20250812 01:52:18.239288 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, a9a78e29c1934ce6a95ec3162de22063; no voters: d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:18.240121 3042 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:52:18.250029 3041 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893), d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715)
I20250812 01:52:18.257992 3042 raft_consensus.cc:695] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 LEADER]: Becoming Leader. State: Replica: a9a78e29c1934ce6a95ec3162de22063, State: Running, Role: LEADER
I20250812 01:52:18.259426 3042 consensus_queue.cc:237] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
W20250812 01:52:18.266847 2869 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
W20250812 01:52:18.277952 2869 leader_election.cc:336] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715): Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
I20250812 01:52:18.283367 2668 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "1d9051b6101f47f2b41230942a2a80d4" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a9a78e29c1934ce6a95ec3162de22063" is_pre_election: true
I20250812 01:52:18.285496 2869 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4; no voters: a9a78e29c1934ce6a95ec3162de22063, d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:18.286648 3041 raft_consensus.cc:2747] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
W20250812 01:52:18.545173 3031 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 3025
W20250812 01:52:17.141471 3033 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:18.565353 3031 kernel_stack_watchdog.cc:198] Thread 3025 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:52:18.570616 3025 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.430s user 0.000s sys 0.002s
W20250812 01:52:18.570616 3034 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1425 milliseconds
W20250812 01:52:18.570955 3025 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.430s user 0.000s sys 0.002s
W20250812 01:52:18.572700 3035 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:18.572772 3025 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:18.573967 3025 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:18.576404 3025 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:18.577755 3025 hybrid_clock.cc:648] HybridClock initialized: now 1754963538577723 us; error 43 us; skew 500 ppm
I20250812 01:52:18.578540 3025 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:18.584640 3025 webserver.cc:489] Webserver started at http://127.2.74.126:41893/ using document root <none> and password file <none>
I20250812 01:52:18.585564 3025 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:18.585788 3025 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:18.594252 3025 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.004s
I20250812 01:52:18.598668 3054 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:18.599670 3025 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250812 01:52:18.599980 3025 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a5492d66817849be9f419cf0c573c0f8"
format_stamp: "Formatted at 2025-08-12 01:52:06 on dist-test-slave-3nxt"
I20250812 01:52:18.601941 3025 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:18.654336 3025 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:18.655756 3025 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:18.656200 3025 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:18.730685 3025 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:46593
I20250812 01:52:18.730774 3105 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:46593 every 8 connection(s)
I20250812 01:52:18.733776 3025 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:52:18.741551 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3025
I20250812 01:52:18.742151 2345 kudu-admin-test.cc:735] Forcing unsafe config change on tserver a9a78e29c1934ce6a95ec3162de22063
I20250812 01:52:18.748791 3107 sys_catalog.cc:263] Verifying existing consensus state
I20250812 01:52:18.755999 2935 raft_consensus.cc:1273] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Refusing update from remote peer a9a78e29c1934ce6a95ec3162de22063: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250812 01:52:18.756841 2981 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:18.757706 3107 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Bootstrap starting.
W20250812 01:52:18.757741 2601 consensus_peers.cc:489] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 -> Peer d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715): Couldn't send request to peer d5484e5a1adb4030a8725eee2f58e864. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250812 01:52:18.757781 3042 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250812 01:52:18.801424 2713 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
I20250812 01:52:18.818293 3107 log.cc:826] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:18.842226 3107 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:52:18.843041 3107 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Bootstrap complete.
I20250812 01:52:18.861794 3107 raft_consensus.cc:357] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:18.863948 3107 raft_consensus.cc:738] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a5492d66817849be9f419cf0c573c0f8, State: Initialized, Role: FOLLOWER
I20250812 01:52:18.864710 3107 consensus_queue.cc:260] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:18.865243 3107 raft_consensus.cc:397] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:52:18.865525 3107 raft_consensus.cc:491] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:52:18.865839 3107 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:52:18.871591 3107 raft_consensus.cc:513] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:18.872305 3107 leader_election.cc:304] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a5492d66817849be9f419cf0c573c0f8; no voters:
I20250812 01:52:18.874567 3107 leader_election.cc:290] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250812 01:52:18.874989 3116 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:52:18.884073 3116 raft_consensus.cc:695] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [term 2 LEADER]: Becoming Leader. State: Replica: a5492d66817849be9f419cf0c573c0f8, State: Running, Role: LEADER
I20250812 01:52:18.885133 3116 consensus_queue.cc:237] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } }
I20250812 01:52:18.885542 3107 sys_catalog.cc:564] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:52:18.909206 3117 sys_catalog.cc:455] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "a5492d66817849be9f419cf0c573c0f8" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } } }
I20250812 01:52:18.909997 3117 sys_catalog.cc:458] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:18.909206 3118 sys_catalog.cc:455] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a5492d66817849be9f419cf0c573c0f8. Latest consensus state: current_term: 2 leader_uuid: "a5492d66817849be9f419cf0c573c0f8" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a5492d66817849be9f419cf0c573c0f8" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 46593 } } }
I20250812 01:52:18.911566 3118 sys_catalog.cc:458] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:18.926000 3123 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:52:18.947378 3123 catalog_manager.cc:671] Loaded metadata for table TestTable [id=0379d00b589b411f99befb352cadfd10]
I20250812 01:52:18.965287 3123 tablet_loader.cc:96] loaded metadata for tablet 77712b55ba834c10ba2ce1296f532188 (table TestTable [id=0379d00b589b411f99befb352cadfd10])
I20250812 01:52:18.966898 3123 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:52:18.972000 3123 catalog_manager.cc:1261] Loaded cluster ID: 72c2f5539c4a4fcba6a134a9301288ce
I20250812 01:52:18.972401 3123 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:52:18.991214 3123 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:52:18.999946 3123 catalog_manager.cc:5966] T 00000000000000000000000000000000 P a5492d66817849be9f419cf0c573c0f8: Loaded TSK: 0
I20250812 01:52:19.007901 3123 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:52:19.011086 2847 heartbeater.cc:344] Connected to a master server at 127.2.74.126:46593
W20250812 01:52:19.151157 3110 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:19.151757 3110 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:19.184450 3110 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250812 01:52:19.794924 3071 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" instance_seqno: 1754963535747138) as {username='slave'} at 127.2.74.68:56839; Asking this server to re-register.
I20250812 01:52:19.796762 2981 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:19.797364 2981 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:19.801096 3071 ts_manager.cc:194] Registered new tserver with Master: 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:19.809406 3070 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" instance_seqno: 1754963531844230) as {username='slave'} at 127.2.74.66:54215; Asking this server to re-register.
I20250812 01:52:19.811041 2713 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:19.811662 2713 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:19.814744 3070 ts_manager.cc:194] Registered new tserver with Master: a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893)
I20250812 01:52:19.805663 3071 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 reported cstate change: term changed from 1 to 2, leader changed from d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65) to a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66). New cstate: current_term: 2 leader_uuid: "a9a78e29c1934ce6a95ec3162de22063" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } }
I20250812 01:52:20.016232 3070 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "c5eafe9f3f174e129e56831515823c69" instance_seqno: 1754963533883510) as {username='slave'} at 127.2.74.67:42557; Asking this server to re-register.
I20250812 01:52:20.017848 2847 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:20.018476 2847 heartbeater.cc:507] Master 127.2.74.126:46593 requested a full tablet report, sending...
I20250812 01:52:20.020640 3070 ts_manager.cc:194] Registered new tserver with Master: c5eafe9f3f174e129e56831515823c69 (127.2.74.67:41243)
W20250812 01:52:20.631804 3140 debug-util.cc:398] Leaking SignalData structure 0x7b08000347a0 after lost signal to thread 3110
W20250812 01:52:20.632310 3140 kernel_stack_watchdog.cc:198] Thread 3110 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:52:20.972617 3110 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.744s user 0.599s sys 1.075s
W20250812 01:52:21.089834 3110 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.862s user 0.602s sys 1.085s
I20250812 01:52:21.130810 2668 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "a9a78e29c1934ce6a95ec3162de22063"
tablet_id: "77712b55ba834c10ba2ce1296f532188"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4"
}
peers {
permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063"
}
}
from {username='slave'} at 127.0.0.1:49860
W20250812 01:52:21.132026 2668 raft_consensus.cc:2216] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true
I20250812 01:52:21.132997 2668 raft_consensus.cc:3053] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 LEADER]: Stepping down as leader of term 2
I20250812 01:52:21.133199 2668 raft_consensus.cc:738] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 LEADER]: Becoming Follower/Learner. State: Replica: a9a78e29c1934ce6a95ec3162de22063, State: Running, Role: LEADER
I20250812 01:52:21.133821 2668 consensus_queue.cc:260] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:21.134708 2668 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 2 FOLLOWER]: Advancing to term 3
W20250812 01:52:21.833865 3140 debug-util.cc:398] Leaking SignalData structure 0x7b08000379e0 after lost signal to thread 3110
W20250812 01:52:21.984896 2977 debug-util.cc:398] Leaking SignalData structure 0x7b0800041060 after lost signal to thread 2852
W20250812 01:52:21.985834 2977 debug-util.cc:398] Leaking SignalData structure 0x7b08000bc480 after lost signal to thread 2980
I20250812 01:52:22.482029 3169 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Starting pre-election (detected failure of leader a9a78e29c1934ce6a95ec3162de22063)
I20250812 01:52:22.482450 3169 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } }
I20250812 01:52:22.483858 3169 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893), d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715)
I20250812 01:52:22.484848 2668 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "1d9051b6101f47f2b41230942a2a80d4" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "a9a78e29c1934ce6a95ec3162de22063" is_pre_election: true
W20250812 01:52:22.489401 2869 leader_election.cc:336] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65:41715): Network error: Client connection negotiation failed: client connection to 127.2.74.65:41715: connect: Connection refused (error 111)
I20250812 01:52:22.489872 2869 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4; no voters: a9a78e29c1934ce6a95ec3162de22063, d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:22.490631 3169 raft_consensus.cc:2747] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250812 01:52:22.641268 3172 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250812 01:52:22.641677 3172 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true
I20250812 01:52:22.642776 3172 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:22.643985 2935 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4" is_pre_election: true
I20250812 01:52:22.644447 2935 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a9a78e29c1934ce6a95ec3162de22063 in term 2.
I20250812 01:52:22.645516 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, a9a78e29c1934ce6a95ec3162de22063; no voters:
I20250812 01:52:22.646107 3172 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250812 01:52:22.646371 3172 raft_consensus.cc:491] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250812 01:52:22.646634 3172 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 3 FOLLOWER]: Advancing to term 4
I20250812 01:52:22.650766 3172 raft_consensus.cc:513] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true
I20250812 01:52:22.651764 3172 leader_election.cc:290] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 4 election: Requested vote from peers 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68:42515)
I20250812 01:52:22.652891 2935 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "77712b55ba834c10ba2ce1296f532188" candidate_uuid: "a9a78e29c1934ce6a95ec3162de22063" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "1d9051b6101f47f2b41230942a2a80d4"
I20250812 01:52:22.653297 2935 raft_consensus.cc:3058] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 2 FOLLOWER]: Advancing to term 4
I20250812 01:52:22.657421 2935 raft_consensus.cc:2466] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a9a78e29c1934ce6a95ec3162de22063 in term 4.
I20250812 01:52:22.658290 2603 leader_election.cc:304] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 1d9051b6101f47f2b41230942a2a80d4, a9a78e29c1934ce6a95ec3162de22063; no voters:
I20250812 01:52:22.658865 3172 raft_consensus.cc:2802] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 FOLLOWER]: Leader election won for term 4
I20250812 01:52:22.659883 3172 raft_consensus.cc:695] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 LEADER]: Becoming Leader. State: Replica: a9a78e29c1934ce6a95ec3162de22063, State: Running, Role: LEADER
I20250812 01:52:22.660611 3172 consensus_queue.cc:237] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true
I20250812 01:52:22.667407 3070 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 reported cstate change: term changed from 2 to 4, now has a pending config: VOTER a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66), VOTER 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68). New cstate: current_term: 4 leader_uuid: "a9a78e29c1934ce6a95ec3162de22063" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d5484e5a1adb4030a8725eee2f58e864" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 41715 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } health_report { overall_health: UNKNOWN } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true }
I20250812 01:52:23.046871 2935 raft_consensus.cc:1273] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Refusing update from remote peer a9a78e29c1934ce6a95ec3162de22063: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250812 01:52:23.048122 3172 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250812 01:52:23.056825 3173 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true }
I20250812 01:52:23.059567 2935 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true }
I20250812 01:52:23.071547 3070 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 reported cstate change: config changed from index -1 to 3, VOTER d5484e5a1adb4030a8725eee2f58e864 (127.2.74.65) evicted, no longer has a pending config: VOTER a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66), VOTER 1d9051b6101f47f2b41230942a2a80d4 (127.2.74.68). New cstate: current_term: 4 leader_uuid: "a9a78e29c1934ce6a95ec3162de22063" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } unsafe_config_change: true }
W20250812 01:52:23.080803 3070 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 77712b55ba834c10ba2ce1296f532188 on TS d5484e5a1adb4030a8725eee2f58e864: Not found: failed to reset TS proxy: Could not find TS for UUID d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:23.106389 2668 consensus_queue.cc:237] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true
I20250812 01:52:23.112562 2935 raft_consensus.cc:1273] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Refusing update from remote peer a9a78e29c1934ce6a95ec3162de22063: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250812 01:52:23.114650 3172 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.001s
I20250812 01:52:23.122216 3173 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER c5eafe9f3f174e129e56831515823c69 (127.2.74.67) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true }
I20250812 01:52:23.124078 2936 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER c5eafe9f3f174e129e56831515823c69 (127.2.74.67) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true }
W20250812 01:52:23.134352 3056 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 77712b55ba834c10ba2ce1296f532188 on TS d5484e5a1adb4030a8725eee2f58e864 failed: Not found: failed to reset TS proxy: Could not find TS for UUID d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:23.136224 3055 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 77712b55ba834c10ba2ce1296f532188 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250812 01:52:23.141620 3071 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 reported cstate change: config changed from index 3 to 5, NON_VOTER c5eafe9f3f174e129e56831515823c69 (127.2.74.67) added. New cstate: current_term: 4 leader_uuid: "a9a78e29c1934ce6a95ec3162de22063" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true }
W20250812 01:52:23.144685 2601 consensus_peers.cc:489] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 -> Peer c5eafe9f3f174e129e56831515823c69 (127.2.74.67:41243): Couldn't send request to peer c5eafe9f3f174e129e56831515823c69. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 77712b55ba834c10ba2ce1296f532188. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:52:23.556782 3188 ts_tablet_manager.cc:927] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Initiating tablet copy from peer a9a78e29c1934ce6a95ec3162de22063 (127.2.74.66:32893)
I20250812 01:52:23.559538 3188 tablet_copy_client.cc:323] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: tablet copy: Beginning tablet copy session from remote peer at address 127.2.74.66:32893
I20250812 01:52:23.572458 2688 tablet_copy_service.cc:140] P a9a78e29c1934ce6a95ec3162de22063: Received BeginTabletCopySession request for tablet 77712b55ba834c10ba2ce1296f532188 from peer c5eafe9f3f174e129e56831515823c69 ({username='slave'} at 127.2.74.67:35171)
I20250812 01:52:23.573150 2688 tablet_copy_service.cc:161] P a9a78e29c1934ce6a95ec3162de22063: Beginning new tablet copy session on tablet 77712b55ba834c10ba2ce1296f532188 from peer c5eafe9f3f174e129e56831515823c69 at {username='slave'} at 127.2.74.67:35171: session id = c5eafe9f3f174e129e56831515823c69-77712b55ba834c10ba2ce1296f532188
I20250812 01:52:23.581809 2688 tablet_copy_source_session.cc:215] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: Tablet Copy: opened 0 blocks and 1 log segments
I20250812 01:52:23.587765 3188 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 77712b55ba834c10ba2ce1296f532188. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:23.609917 3188 tablet_copy_client.cc:806] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: tablet copy: Starting download of 0 data blocks...
I20250812 01:52:23.610567 3188 tablet_copy_client.cc:670] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: tablet copy: Starting download of 1 WAL segments...
I20250812 01:52:23.614897 3188 tablet_copy_client.cc:538] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250812 01:52:23.620791 3188 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Bootstrap starting.
I20250812 01:52:23.632941 3188 log.cc:826] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:23.649260 3188 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:52:23.650322 3188 tablet_bootstrap.cc:492] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Bootstrap complete.
I20250812 01:52:23.651166 3188 ts_tablet_manager.cc:1397] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Time spent bootstrapping tablet: real 0.031s user 0.027s sys 0.005s
I20250812 01:52:23.679770 3188 raft_consensus.cc:357] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true
I20250812 01:52:23.681042 3188 raft_consensus.cc:738] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: c5eafe9f3f174e129e56831515823c69, State: Initialized, Role: LEARNER
I20250812 01:52:23.681900 3188 consensus_queue.cc:260] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: true } } unsafe_config_change: true
I20250812 01:52:23.686101 3188 ts_tablet_manager.cc:1428] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69: Time spent starting tablet: real 0.035s user 0.036s sys 0.000s
I20250812 01:52:23.689843 2688 tablet_copy_service.cc:342] P a9a78e29c1934ce6a95ec3162de22063: Request end of tablet copy session c5eafe9f3f174e129e56831515823c69-77712b55ba834c10ba2ce1296f532188 received from {username='slave'} at 127.2.74.67:35171
I20250812 01:52:23.690315 2688 tablet_copy_service.cc:434] P a9a78e29c1934ce6a95ec3162de22063: ending tablet copy session c5eafe9f3f174e129e56831515823c69-77712b55ba834c10ba2ce1296f532188 on tablet 77712b55ba834c10ba2ce1296f532188 with peer c5eafe9f3f174e129e56831515823c69
I20250812 01:52:24.121613 2802 raft_consensus.cc:1215] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250812 01:52:24.301697 3056 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 77712b55ba834c10ba2ce1296f532188 on TS d5484e5a1adb4030a8725eee2f58e864 failed: Not found: failed to reset TS proxy: Could not find TS for UUID d5484e5a1adb4030a8725eee2f58e864
I20250812 01:52:24.567278 3194 raft_consensus.cc:1062] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063: attempting to promote NON_VOTER c5eafe9f3f174e129e56831515823c69 to VOTER
I20250812 01:52:24.568817 3194 consensus_queue.cc:237] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false } } unsafe_config_change: true
I20250812 01:52:24.573603 2802 raft_consensus.cc:1273] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [term 4 LEARNER]: Refusing update from remote peer a9a78e29c1934ce6a95ec3162de22063: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250812 01:52:24.574093 2936 raft_consensus.cc:1273] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Refusing update from remote peer a9a78e29c1934ce6a95ec3162de22063: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250812 01:52:24.574878 3193 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250812 01:52:24.575845 3194 consensus_queue.cc:1035] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250812 01:52:24.583632 3194 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, c5eafe9f3f174e129e56831515823c69 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false } } unsafe_config_change: true }
I20250812 01:52:24.585443 2936 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P 1d9051b6101f47f2b41230942a2a80d4 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, c5eafe9f3f174e129e56831515823c69 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false } } unsafe_config_change: true }
I20250812 01:52:24.585708 2802 raft_consensus.cc:2953] T 77712b55ba834c10ba2ce1296f532188 P c5eafe9f3f174e129e56831515823c69 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, c5eafe9f3f174e129e56831515823c69 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false } } unsafe_config_change: true }
I20250812 01:52:24.594977 3071 catalog_manager.cc:5582] T 77712b55ba834c10ba2ce1296f532188 P a9a78e29c1934ce6a95ec3162de22063 reported cstate change: config changed from index 5 to 6, c5eafe9f3f174e129e56831515823c69 (127.2.74.67) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "a9a78e29c1934ce6a95ec3162de22063" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 32893 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4" member_type: VOTER last_known_addr { host: "127.2.74.68" port: 42515 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "c5eafe9f3f174e129e56831515823c69" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 41243 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250812 01:52:24.690601 2345 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250812 01:52:24.703274 2345 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "77712b55ba834c10ba2ce1296f532188"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: LEADER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "a9a78e29c1934ce6a95ec3162de22063"
rpc_addresses {
host: "127.2.74.66"
port: 32893
}
}
ts_infos {
permanent_uuid: "1d9051b6101f47f2b41230942a2a80d4"
rpc_addresses {
host: "127.2.74.68"
port: 42515
}
}
ts_infos {
permanent_uuid: "c5eafe9f3f174e129e56831515823c69"
rpc_addresses {
host: "127.2.74.67"
port: 41243
}
}
I20250812 01:52:24.706401 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 2584
I20250812 01:52:24.739745 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 2717
I20250812 01:52:24.777297 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 2851
I20250812 01:52:24.805281 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3025
2025-08-12T01:52:24Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (21644 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250812 01:52:24.868516 2345 test_util.cc:276] Using random seed: 1253017226
I20250812 01:52:24.874480 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:52:24.874650 2345 ts_itest-base.cc:116] --------------
I20250812 01:52:24.874818 2345 ts_itest-base.cc:117] 3 tablet servers
I20250812 01:52:24.874962 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:52:24.875099 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:52:24Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:52:24Z Disabled control of system clock
I20250812 01:52:24.913024 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:42819
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:34801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:42819
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250812 01:52:25.213358 3213 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:25.213917 3213 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:25.214323 3213 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:25.246733 3213 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:52:25.247021 3213 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:25.247222 3213 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:52:25.247416 3213 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:52:25.283123 3213 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:34801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:42819
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:42819
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:25.284409 3213 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:25.286083 3213 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:25.297498 3219 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:26.700760 3218 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 3213
W20250812 01:52:27.094511 3218 kernel_stack_watchdog.cc:198] Thread 3213 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:52:27.095055 3213 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.798s user 0.703s sys 1.094s
W20250812 01:52:27.095485 3213 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.798s user 0.703s sys 1.094s
W20250812 01:52:25.298010 3220 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:27.096083 3221 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1797 milliseconds
W20250812 01:52:27.097165 3222 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:27.097143 3213 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:27.100435 3213 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:27.102882 3213 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:27.104210 3213 hybrid_clock.cc:648] HybridClock initialized: now 1754963547104173 us; error 53 us; skew 500 ppm
I20250812 01:52:27.105037 3213 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:27.111017 3213 webserver.cc:489] Webserver started at http://127.2.74.126:34735/ using document root <none> and password file <none>
I20250812 01:52:27.111963 3213 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:27.112162 3213 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:27.112648 3213 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:27.117060 3213 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "ba4a18342f5646a1a540c2963b1b360a"
format_stamp: "Formatted at 2025-08-12 01:52:27 on dist-test-slave-3nxt"
I20250812 01:52:27.118153 3213 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "ba4a18342f5646a1a540c2963b1b360a"
format_stamp: "Formatted at 2025-08-12 01:52:27 on dist-test-slave-3nxt"
I20250812 01:52:27.125291 3213 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250812 01:52:27.130729 3229 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:27.131775 3213 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.003s
I20250812 01:52:27.132117 3213 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "ba4a18342f5646a1a540c2963b1b360a"
format_stamp: "Formatted at 2025-08-12 01:52:27 on dist-test-slave-3nxt"
I20250812 01:52:27.132468 3213 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:27.183336 3213 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:27.184808 3213 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:27.185236 3213 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:27.257313 3213 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:42819
I20250812 01:52:27.257377 3280 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:42819 every 8 connection(s)
I20250812 01:52:27.259940 3213 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:52:27.264755 3281 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:27.264726 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3213
I20250812 01:52:27.265077 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:52:27.289800 3281 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a: Bootstrap starting.
I20250812 01:52:27.296304 3281 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:27.297973 3281 log.cc:826] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:27.302451 3281 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a: No bootstrap required, opened a new log
I20250812 01:52:27.320539 3281 raft_consensus.cc:357] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } }
I20250812 01:52:27.321215 3281 raft_consensus.cc:383] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:27.321415 3281 raft_consensus.cc:738] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ba4a18342f5646a1a540c2963b1b360a, State: Initialized, Role: FOLLOWER
I20250812 01:52:27.322019 3281 consensus_queue.cc:260] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } }
I20250812 01:52:27.322491 3281 raft_consensus.cc:397] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:52:27.322757 3281 raft_consensus.cc:491] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:52:27.323076 3281 raft_consensus.cc:3058] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:27.327543 3281 raft_consensus.cc:513] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } }
I20250812 01:52:27.328251 3281 leader_election.cc:304] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ba4a18342f5646a1a540c2963b1b360a; no voters:
I20250812 01:52:27.329885 3281 leader_election.cc:290] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:52:27.330572 3286 raft_consensus.cc:2802] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:27.332891 3286 raft_consensus.cc:695] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [term 1 LEADER]: Becoming Leader. State: Replica: ba4a18342f5646a1a540c2963b1b360a, State: Running, Role: LEADER
I20250812 01:52:27.333734 3286 consensus_queue.cc:237] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } }
I20250812 01:52:27.334031 3281 sys_catalog.cc:564] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:52:27.343150 3287 sys_catalog.cc:455] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "ba4a18342f5646a1a540c2963b1b360a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } } }
I20250812 01:52:27.343832 3287 sys_catalog.cc:458] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:27.344005 3288 sys_catalog.cc:455] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [sys.catalog]: SysCatalogTable state changed. Reason: New leader ba4a18342f5646a1a540c2963b1b360a. Latest consensus state: current_term: 1 leader_uuid: "ba4a18342f5646a1a540c2963b1b360a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba4a18342f5646a1a540c2963b1b360a" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42819 } } }
I20250812 01:52:27.344799 3288 sys_catalog.cc:458] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:27.349429 3295 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:52:27.361151 3295 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:52:27.375715 3295 catalog_manager.cc:1349] Generated new cluster ID: de16381f2470460bbcb87363a4edab6e
I20250812 01:52:27.375972 3295 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:52:27.400439 3295 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:52:27.401854 3295 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:52:27.419100 3295 catalog_manager.cc:5955] T 00000000000000000000000000000000 P ba4a18342f5646a1a540c2963b1b360a: Generated new TSK 0
I20250812 01:52:27.419963 3295 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:52:27.429400 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--builtin_ntp_servers=127.2.74.84:34801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250812 01:52:27.731874 3305 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250812 01:52:27.732515 3305 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:27.732820 3305 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:27.733299 3305 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:27.765333 3305 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:27.766346 3305 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:52:27.802855 3305 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:34801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:27.804222 3305 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:27.805876 3305 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:27.819866 3311 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:27.819933 3312 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:29.577756 3313 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1756 milliseconds
W20250812 01:52:29.576882 3305 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.757s user 0.601s sys 1.085s
W20250812 01:52:29.221562 3310 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 3305
W20250812 01:52:29.578639 3305 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.759s user 0.601s sys 1.085s
I20250812 01:52:29.579079 3305 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:52:29.579137 3314 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:29.583021 3305 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:29.585265 3305 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:29.586661 3305 hybrid_clock.cc:648] HybridClock initialized: now 1754963549586633 us; error 49 us; skew 500 ppm
I20250812 01:52:29.587476 3305 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:29.593746 3305 webserver.cc:489] Webserver started at http://127.2.74.65:46171/ using document root <none> and password file <none>
I20250812 01:52:29.594719 3305 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:29.594947 3305 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:29.595395 3305 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:29.599942 3305 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "25713a5c96dd4e39aec5f036f1adcd51"
format_stamp: "Formatted at 2025-08-12 01:52:29 on dist-test-slave-3nxt"
I20250812 01:52:29.601138 3305 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "25713a5c96dd4e39aec5f036f1adcd51"
format_stamp: "Formatted at 2025-08-12 01:52:29 on dist-test-slave-3nxt"
I20250812 01:52:29.608924 3305 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250812 01:52:29.615044 3321 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:29.616274 3305 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250812 01:52:29.616674 3305 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "25713a5c96dd4e39aec5f036f1adcd51"
format_stamp: "Formatted at 2025-08-12 01:52:29 on dist-test-slave-3nxt"
I20250812 01:52:29.617023 3305 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:29.673177 3305 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:29.674715 3305 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:29.675158 3305 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:29.678301 3305 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:29.682672 3305 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:29.682906 3305 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:29.683163 3305 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:29.683346 3305 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:29.860910 3305 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:35445
I20250812 01:52:29.861106 3433 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:35445 every 8 connection(s)
I20250812 01:52:29.863624 3305 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:52:29.869287 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3305
I20250812 01:52:29.869693 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:52:29.877969 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--builtin_ntp_servers=127.2.74.84:34801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250812 01:52:29.894619 3434 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42819
I20250812 01:52:29.895151 3434 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:29.896580 3434 heartbeater.cc:507] Master 127.2.74.126:42819 requested a full tablet report, sending...
I20250812 01:52:29.899962 3246 ts_manager.cc:194] Registered new tserver with Master: 25713a5c96dd4e39aec5f036f1adcd51 (127.2.74.65:35445)
I20250812 01:52:29.902750 3246 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:56157
W20250812 01:52:30.206022 3438 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250812 01:52:30.206686 3438 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:30.206955 3438 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:30.207465 3438 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:30.239511 3438 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:30.240396 3438 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:52:30.288167 3438 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:34801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:30.289634 3438 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:30.291630 3438 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:30.305696 3444 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:30.907543 3434 heartbeater.cc:499] Master 127.2.74.126:42819 was elected leader, sending a full tablet report...
W20250812 01:52:30.305848 3445 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:31.576756 3438 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.272s user 0.403s sys 0.859s
W20250812 01:52:31.577191 3446 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1267 milliseconds
W20250812 01:52:31.577320 3438 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.272s user 0.404s sys 0.859s
W20250812 01:52:31.578459 3447 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:31.578540 3438 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:31.579771 3438 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:31.582091 3438 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:31.583460 3438 hybrid_clock.cc:648] HybridClock initialized: now 1754963551583434 us; error 43 us; skew 500 ppm
I20250812 01:52:31.584280 3438 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:31.592214 3438 webserver.cc:489] Webserver started at http://127.2.74.66:33291/ using document root <none> and password file <none>
I20250812 01:52:31.593254 3438 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:31.593492 3438 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:31.593954 3438 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:31.598618 3438 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "5367384e14174e09a455f9f42fb76194"
format_stamp: "Formatted at 2025-08-12 01:52:31 on dist-test-slave-3nxt"
I20250812 01:52:31.599784 3438 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "5367384e14174e09a455f9f42fb76194"
format_stamp: "Formatted at 2025-08-12 01:52:31 on dist-test-slave-3nxt"
I20250812 01:52:31.608440 3438 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.009s sys 0.001s
I20250812 01:52:31.614706 3454 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:31.615999 3438 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250812 01:52:31.616375 3438 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "5367384e14174e09a455f9f42fb76194"
format_stamp: "Formatted at 2025-08-12 01:52:31 on dist-test-slave-3nxt"
I20250812 01:52:31.616775 3438 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:31.690198 3438 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:31.692319 3438 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:31.692993 3438 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:31.696702 3438 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:31.701406 3438 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:31.701630 3438 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:31.701835 3438 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:31.701968 3438 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:31.833947 3438 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:39883
I20250812 01:52:31.834059 3566 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:39883 every 8 connection(s)
I20250812 01:52:31.836573 3438 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:52:31.839910 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3438
I20250812 01:52:31.840382 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:52:31.847150 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--builtin_ntp_servers=127.2.74.84:34801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250812 01:52:31.858116 3567 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42819
I20250812 01:52:31.858683 3567 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:31.860015 3567 heartbeater.cc:507] Master 127.2.74.126:42819 requested a full tablet report, sending...
I20250812 01:52:31.862834 3246 ts_manager.cc:194] Registered new tserver with Master: 5367384e14174e09a455f9f42fb76194 (127.2.74.66:39883)
I20250812 01:52:31.864568 3246 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:37571
W20250812 01:52:32.152462 3571 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250812 01:52:32.153178 3571 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:32.153455 3571 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:32.153950 3571 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:32.185820 3571 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:32.186718 3571 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:52:32.222815 3571 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:34801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42819
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:32.224138 3571 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:32.225751 3571 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:32.237394 3577 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:32.868988 3567 heartbeater.cc:499] Master 127.2.74.126:42819 was elected leader, sending a full tablet report...
W20250812 01:52:32.238102 3578 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:33.511289 3580 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:33.514917 3571 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.277s user 0.452s sys 0.816s
W20250812 01:52:33.515435 3571 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.278s user 0.453s sys 0.817s
W20250812 01:52:33.524477 3579 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1284 milliseconds
I20250812 01:52:33.524502 3571 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:33.526278 3571 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:33.529357 3571 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:33.530929 3571 hybrid_clock.cc:648] HybridClock initialized: now 1754963553530891 us; error 41 us; skew 500 ppm
I20250812 01:52:33.532209 3571 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:33.541878 3571 webserver.cc:489] Webserver started at http://127.2.74.67:35031/ using document root <none> and password file <none>
I20250812 01:52:33.543320 3571 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:33.543648 3571 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:33.544283 3571 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:33.552462 3571 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "d3285b6fb0fb4dcdb9a555535d589b24"
format_stamp: "Formatted at 2025-08-12 01:52:33 on dist-test-slave-3nxt"
I20250812 01:52:33.554217 3571 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "d3285b6fb0fb4dcdb9a555535d589b24"
format_stamp: "Formatted at 2025-08-12 01:52:33 on dist-test-slave-3nxt"
I20250812 01:52:33.564929 3571 fs_manager.cc:696] Time spent creating directory manager: real 0.010s user 0.011s sys 0.000s
I20250812 01:52:33.572863 3588 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:33.574132 3571 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250812 01:52:33.574492 3571 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "d3285b6fb0fb4dcdb9a555535d589b24"
format_stamp: "Formatted at 2025-08-12 01:52:33 on dist-test-slave-3nxt"
I20250812 01:52:33.574832 3571 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:33.656371 3571 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:33.657920 3571 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:33.658339 3571 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:33.660856 3571 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:33.664942 3571 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:33.665159 3571 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:33.665366 3571 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:33.665500 3571 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:33.802577 3571 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:33603
I20250812 01:52:33.802697 3700 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:33603 every 8 connection(s)
I20250812 01:52:33.805197 3571 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:52:33.811815 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3571
I20250812 01:52:33.812559 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:52:33.834558 3701 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42819
I20250812 01:52:33.835036 3701 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:33.836150 3701 heartbeater.cc:507] Master 127.2.74.126:42819 requested a full tablet report, sending...
I20250812 01:52:33.838568 3246 ts_manager.cc:194] Registered new tserver with Master: d3285b6fb0fb4dcdb9a555535d589b24 (127.2.74.67:33603)
I20250812 01:52:33.840397 3246 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:58229
I20250812 01:52:33.849086 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:52:33.885059 3246 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:42186:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250812 01:52:33.902933 3246 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250812 01:52:33.951061 3369 tablet_service.cc:1468] Processing CreateTablet for tablet 06370b9273684401a39bcda852efdda8 (DEFAULT_TABLE table=TestTable [id=e1daf85567f04edd8e0f7da5ccdd7db2]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:33.951503 3502 tablet_service.cc:1468] Processing CreateTablet for tablet 06370b9273684401a39bcda852efdda8 (DEFAULT_TABLE table=TestTable [id=e1daf85567f04edd8e0f7da5ccdd7db2]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:33.952988 3369 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06370b9273684401a39bcda852efdda8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:33.953498 3502 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06370b9273684401a39bcda852efdda8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:33.953593 3636 tablet_service.cc:1468] Processing CreateTablet for tablet 06370b9273684401a39bcda852efdda8 (DEFAULT_TABLE table=TestTable [id=e1daf85567f04edd8e0f7da5ccdd7db2]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:33.955353 3636 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06370b9273684401a39bcda852efdda8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:33.975849 3720 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Bootstrap starting.
I20250812 01:52:33.981945 3721 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Bootstrap starting.
I20250812 01:52:33.983428 3720 tablet_bootstrap.cc:654] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:33.984861 3722 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Bootstrap starting.
I20250812 01:52:33.986752 3720 log.cc:826] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:33.989665 3721 tablet_bootstrap.cc:654] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:33.990727 3722 tablet_bootstrap.cc:654] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:33.992552 3720 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: No bootstrap required, opened a new log
I20250812 01:52:33.993017 3721 log.cc:826] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:33.993039 3722 log.cc:826] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:33.993098 3720 ts_tablet_manager.cc:1397] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Time spent bootstrapping tablet: real 0.018s user 0.006s sys 0.010s
I20250812 01:52:33.998392 3722 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: No bootstrap required, opened a new log
I20250812 01:52:33.998487 3721 tablet_bootstrap.cc:492] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: No bootstrap required, opened a new log
I20250812 01:52:33.998832 3722 ts_tablet_manager.cc:1397] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Time spent bootstrapping tablet: real 0.014s user 0.008s sys 0.004s
I20250812 01:52:33.998972 3721 ts_tablet_manager.cc:1397] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Time spent bootstrapping tablet: real 0.018s user 0.014s sys 0.000s
I20250812 01:52:34.018564 3722 raft_consensus.cc:357] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.019289 3722 raft_consensus.cc:738] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d3285b6fb0fb4dcdb9a555535d589b24, State: Initialized, Role: FOLLOWER
I20250812 01:52:34.019907 3722 consensus_queue.cc:260] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.021258 3720 raft_consensus.cc:357] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.022478 3720 raft_consensus.cc:738] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 25713a5c96dd4e39aec5f036f1adcd51, State: Initialized, Role: FOLLOWER
I20250812 01:52:34.023238 3720 consensus_queue.cc:260] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.026242 3721 raft_consensus.cc:357] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.027012 3701 heartbeater.cc:499] Master 127.2.74.126:42819 was elected leader, sending a full tablet report...
I20250812 01:52:34.027554 3721 raft_consensus.cc:738] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5367384e14174e09a455f9f42fb76194, State: Initialized, Role: FOLLOWER
I20250812 01:52:34.028477 3722 ts_tablet_manager.cc:1428] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Time spent starting tablet: real 0.029s user 0.018s sys 0.011s
I20250812 01:52:34.028460 3721 consensus_queue.cc:260] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.032789 3720 ts_tablet_manager.cc:1428] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Time spent starting tablet: real 0.039s user 0.033s sys 0.006s
I20250812 01:52:34.036060 3721 ts_tablet_manager.cc:1428] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Time spent starting tablet: real 0.037s user 0.035s sys 0.004s
I20250812 01:52:34.056473 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250812 01:52:34.059326 3702 tablet.cc:2378] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:34.059692 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 25713a5c96dd4e39aec5f036f1adcd51 to finish bootstrapping
I20250812 01:52:34.074445 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5367384e14174e09a455f9f42fb76194 to finish bootstrapping
I20250812 01:52:34.084776 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d3285b6fb0fb4dcdb9a555535d589b24 to finish bootstrapping
W20250812 01:52:34.093544 3568 tablet.cc:2378] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:34.127393 3389 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "06370b9273684401a39bcda852efdda8"
dest_uuid: "25713a5c96dd4e39aec5f036f1adcd51"
from {username='slave'} at 127.0.0.1:60520
I20250812 01:52:34.127928 3389 raft_consensus.cc:491] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250812 01:52:34.128191 3389 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:34.132493 3389 raft_consensus.cc:513] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
W20250812 01:52:34.134469 3435 tablet.cc:2378] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:34.136384 3389 leader_election.cc:290] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [CANDIDATE]: Term 1 election: Requested vote from peers 5367384e14174e09a455f9f42fb76194 (127.2.74.66:39883), d3285b6fb0fb4dcdb9a555535d589b24 (127.2.74.67:33603)
I20250812 01:52:34.146649 2345 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250812 01:52:34.148505 3656 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06370b9273684401a39bcda852efdda8" candidate_uuid: "25713a5c96dd4e39aec5f036f1adcd51" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "d3285b6fb0fb4dcdb9a555535d589b24"
I20250812 01:52:34.149286 3656 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:34.149183 3522 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06370b9273684401a39bcda852efdda8" candidate_uuid: "25713a5c96dd4e39aec5f036f1adcd51" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "5367384e14174e09a455f9f42fb76194"
I20250812 01:52:34.149930 3522 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:34.156392 3656 raft_consensus.cc:2466] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 25713a5c96dd4e39aec5f036f1adcd51 in term 1.
I20250812 01:52:34.156694 3522 raft_consensus.cc:2466] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 25713a5c96dd4e39aec5f036f1adcd51 in term 1.
I20250812 01:52:34.157812 3324 leader_election.cc:304] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 25713a5c96dd4e39aec5f036f1adcd51, d3285b6fb0fb4dcdb9a555535d589b24; no voters:
I20250812 01:52:34.158665 3727 raft_consensus.cc:2802] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:34.160334 3727 raft_consensus.cc:695] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 LEADER]: Becoming Leader. State: Replica: 25713a5c96dd4e39aec5f036f1adcd51, State: Running, Role: LEADER
I20250812 01:52:34.161127 3727 consensus_queue.cc:237] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:34.172315 3243 catalog_manager.cc:5582] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 reported cstate change: term changed from 0 to 1, leader changed from <none> to 25713a5c96dd4e39aec5f036f1adcd51 (127.2.74.65). New cstate: current_term: 1 leader_uuid: "25713a5c96dd4e39aec5f036f1adcd51" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } health_report { overall_health: UNKNOWN } } }
I20250812 01:52:34.252195 2345 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250812 01:52:34.457907 2345 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250812 01:52:34.677788 3735 consensus_queue.cc:1035] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:52:34.694089 3741 consensus_queue.cc:1035] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:52:36.515019 3389 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "06370b9273684401a39bcda852efdda8"
dest_uuid: "25713a5c96dd4e39aec5f036f1adcd51"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:52878
I20250812 01:52:36.515755 3389 raft_consensus.cc:604] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 LEADER]: Received request to transfer leadership
I20250812 01:52:36.917863 3741 raft_consensus.cc:991] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51: Instructing follower d3285b6fb0fb4dcdb9a555535d589b24 to start an election
I20250812 01:52:36.918262 3756 raft_consensus.cc:1079] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 LEADER]: Signalling peer d3285b6fb0fb4dcdb9a555535d589b24 to start an election
I20250812 01:52:36.919637 3656 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "06370b9273684401a39bcda852efdda8"
dest_uuid: "d3285b6fb0fb4dcdb9a555535d589b24"
from {username='slave'} at 127.2.74.65:41305
I20250812 01:52:36.920153 3656 raft_consensus.cc:491] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250812 01:52:36.920423 3656 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:52:36.924686 3656 raft_consensus.cc:513] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:36.926865 3656 leader_election.cc:290] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [CANDIDATE]: Term 2 election: Requested vote from peers 5367384e14174e09a455f9f42fb76194 (127.2.74.66:39883), 25713a5c96dd4e39aec5f036f1adcd51 (127.2.74.65:35445)
I20250812 01:52:36.939710 3389 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06370b9273684401a39bcda852efdda8" candidate_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "25713a5c96dd4e39aec5f036f1adcd51"
I20250812 01:52:36.939956 3522 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06370b9273684401a39bcda852efdda8" candidate_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "5367384e14174e09a455f9f42fb76194"
I20250812 01:52:36.940256 3389 raft_consensus.cc:3053] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 LEADER]: Stepping down as leader of term 1
I20250812 01:52:36.940418 3522 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:52:36.940527 3389 raft_consensus.cc:738] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 25713a5c96dd4e39aec5f036f1adcd51, State: Running, Role: LEADER
I20250812 01:52:36.941110 3389 consensus_queue.cc:260] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:36.942010 3389 raft_consensus.cc:3058] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:52:36.944628 3522 raft_consensus.cc:2466] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d3285b6fb0fb4dcdb9a555535d589b24 in term 2.
I20250812 01:52:36.945711 3592 leader_election.cc:304] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5367384e14174e09a455f9f42fb76194, d3285b6fb0fb4dcdb9a555535d589b24; no voters:
I20250812 01:52:36.946390 3389 raft_consensus.cc:2466] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d3285b6fb0fb4dcdb9a555535d589b24 in term 2.
I20250812 01:52:36.947681 3769 raft_consensus.cc:2802] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:52:36.949059 3769 raft_consensus.cc:695] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [term 2 LEADER]: Becoming Leader. State: Replica: d3285b6fb0fb4dcdb9a555535d589b24, State: Running, Role: LEADER
I20250812 01:52:36.949899 3769 consensus_queue.cc:237] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } }
I20250812 01:52:36.958057 3244 catalog_manager.cc:5582] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 reported cstate change: term changed from 1 to 2, leader changed from 25713a5c96dd4e39aec5f036f1adcd51 (127.2.74.65) to d3285b6fb0fb4dcdb9a555535d589b24 (127.2.74.67). New cstate: current_term: 2 leader_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d3285b6fb0fb4dcdb9a555535d589b24" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33603 } health_report { overall_health: HEALTHY } } }
I20250812 01:52:37.446841 3389 raft_consensus.cc:1273] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 2 FOLLOWER]: Refusing update from remote peer d3285b6fb0fb4dcdb9a555535d589b24: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250812 01:52:37.448065 3769 consensus_queue.cc:1035] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [LEADER]: Connected to new peer: Peer: permanent_uuid: "25713a5c96dd4e39aec5f036f1adcd51" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 35445 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250812 01:52:37.460518 3522 raft_consensus.cc:1273] T 06370b9273684401a39bcda852efdda8 P 5367384e14174e09a455f9f42fb76194 [term 2 FOLLOWER]: Refusing update from remote peer d3285b6fb0fb4dcdb9a555535d589b24: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250812 01:52:37.462008 3778 consensus_queue.cc:1035] T 06370b9273684401a39bcda852efdda8 P d3285b6fb0fb4dcdb9a555535d589b24 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5367384e14174e09a455f9f42fb76194" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 39883 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250812 01:52:39.288532 3389 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "06370b9273684401a39bcda852efdda8"
dest_uuid: "25713a5c96dd4e39aec5f036f1adcd51"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:52882
I20250812 01:52:39.289271 3389 raft_consensus.cc:604] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 2 FOLLOWER]: Received request to transfer leadership
I20250812 01:52:39.289659 3389 raft_consensus.cc:612] T 06370b9273684401a39bcda852efdda8 P 25713a5c96dd4e39aec5f036f1adcd51 [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250812 01:52:40.329960 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3305
I20250812 01:52:40.355301 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3438
I20250812 01:52:40.380111 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3571
I20250812 01:52:40.406136 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3213
2025-08-12T01:52:40Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (15589 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250812 01:52:40.458112 2345 test_util.cc:276] Using random seed: 1268606822
I20250812 01:52:40.462107 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:52:40.462271 2345 ts_itest-base.cc:116] --------------
I20250812 01:52:40.462441 2345 ts_itest-base.cc:117] 3 tablet servers
I20250812 01:52:40.462576 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:52:40.462709 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:52:40Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:52:40Z Disabled control of system clock
I20250812 01:52:40.500289 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:40875
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:39081
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:40875 with env {}
W20250812 01:52:40.808290 3813 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:40.808935 3813 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:40.809350 3813 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:40.840811 3813 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:52:40.841110 3813 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:40.841312 3813 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:52:40.841509 3813 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:52:40.877277 3813 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:39081
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:40875
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:40875
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:40.878537 3813 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:40.880103 3813 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:40.890779 3819 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:40.891297 3820 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:40.895185 3822 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:42.063614 3821 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250812 01:52:42.063699 3813 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:42.067613 3813 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:42.070739 3813 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:42.072192 3813 hybrid_clock.cc:648] HybridClock initialized: now 1754963562072152 us; error 57 us; skew 500 ppm
I20250812 01:52:42.073076 3813 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:42.079931 3813 webserver.cc:489] Webserver started at http://127.2.74.126:40797/ using document root <none> and password file <none>
I20250812 01:52:42.080919 3813 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:42.081142 3813 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:42.081625 3813 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:42.086118 3813 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "bf7125eb46384e34b62c58bc62590521"
format_stamp: "Formatted at 2025-08-12 01:52:42 on dist-test-slave-3nxt"
I20250812 01:52:42.087257 3813 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "bf7125eb46384e34b62c58bc62590521"
format_stamp: "Formatted at 2025-08-12 01:52:42 on dist-test-slave-3nxt"
I20250812 01:52:42.095108 3813 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.001s sys 0.008s
I20250812 01:52:42.100876 3829 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:42.101956 3813 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250812 01:52:42.102293 3813 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "bf7125eb46384e34b62c58bc62590521"
format_stamp: "Formatted at 2025-08-12 01:52:42 on dist-test-slave-3nxt"
I20250812 01:52:42.102619 3813 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:42.164423 3813 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:42.165962 3813 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:42.166409 3813 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:42.235486 3813 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:40875
I20250812 01:52:42.235550 3880 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:40875 every 8 connection(s)
I20250812 01:52:42.238323 3813 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:52:42.243376 3881 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:42.244036 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3813
I20250812 01:52:42.244385 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:52:42.269361 3881 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521: Bootstrap starting.
I20250812 01:52:42.274837 3881 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:42.276669 3881 log.cc:826] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:42.281082 3881 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521: No bootstrap required, opened a new log
I20250812 01:52:42.298585 3881 raft_consensus.cc:357] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } }
I20250812 01:52:42.299273 3881 raft_consensus.cc:383] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:42.299522 3881 raft_consensus.cc:738] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: bf7125eb46384e34b62c58bc62590521, State: Initialized, Role: FOLLOWER
I20250812 01:52:42.300338 3881 consensus_queue.cc:260] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } }
I20250812 01:52:42.301002 3881 raft_consensus.cc:397] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:52:42.301273 3881 raft_consensus.cc:491] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:52:42.301601 3881 raft_consensus.cc:3058] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:42.305707 3881 raft_consensus.cc:513] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } }
I20250812 01:52:42.306435 3881 leader_election.cc:304] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: bf7125eb46384e34b62c58bc62590521; no voters:
I20250812 01:52:42.308111 3881 leader_election.cc:290] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:52:42.308851 3886 raft_consensus.cc:2802] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:42.310987 3886 raft_consensus.cc:695] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [term 1 LEADER]: Becoming Leader. State: Replica: bf7125eb46384e34b62c58bc62590521, State: Running, Role: LEADER
I20250812 01:52:42.311957 3886 consensus_queue.cc:237] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } }
I20250812 01:52:42.313876 3881 sys_catalog.cc:564] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:52:42.322031 3888 sys_catalog.cc:455] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "bf7125eb46384e34b62c58bc62590521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } } }
I20250812 01:52:42.322352 3887 sys_catalog.cc:455] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [sys.catalog]: SysCatalogTable state changed. Reason: New leader bf7125eb46384e34b62c58bc62590521. Latest consensus state: current_term: 1 leader_uuid: "bf7125eb46384e34b62c58bc62590521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "bf7125eb46384e34b62c58bc62590521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 40875 } } }
I20250812 01:52:42.322921 3888 sys_catalog.cc:458] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:42.323032 3887 sys_catalog.cc:458] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:52:42.327009 3894 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:52:42.338158 3894 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:52:42.354447 3894 catalog_manager.cc:1349] Generated new cluster ID: 857ee3a9a65546089ab15579436fd3ab
I20250812 01:52:42.354774 3894 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:52:42.378003 3894 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:52:42.379750 3894 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:52:42.397264 3894 catalog_manager.cc:5955] T 00000000000000000000000000000000 P bf7125eb46384e34b62c58bc62590521: Generated new TSK 0
I20250812 01:52:42.398661 3894 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:52:42.419993 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--builtin_ntp_servers=127.2.74.84:39081
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250812 01:52:42.733865 3905 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:42.734390 3905 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:42.734900 3905 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:42.767832 3905 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:42.768754 3905 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:52:42.804296 3905 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:39081
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:42.805683 3905 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:42.807319 3905 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:42.820180 3911 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:42.822238 3912 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:44.228063 3910 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3905
W20250812 01:52:44.630703 3905 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.809s user 0.631s sys 0.982s
W20250812 01:52:44.631098 3905 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.810s user 0.631s sys 0.982s
W20250812 01:52:44.634112 3914 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:44.636811 3913 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1815 milliseconds
I20250812 01:52:44.636842 3905 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:44.638090 3905 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:44.640344 3905 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:44.641703 3905 hybrid_clock.cc:648] HybridClock initialized: now 1754963564641690 us; error 49 us; skew 500 ppm
I20250812 01:52:44.642489 3905 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:44.648627 3905 webserver.cc:489] Webserver started at http://127.2.74.65:45345/ using document root <none> and password file <none>
I20250812 01:52:44.649570 3905 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:44.649799 3905 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:44.650241 3905 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:44.654668 3905 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "1a18242e4afe4eec90f1ed0110e9f66a"
format_stamp: "Formatted at 2025-08-12 01:52:44 on dist-test-slave-3nxt"
I20250812 01:52:44.655810 3905 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "1a18242e4afe4eec90f1ed0110e9f66a"
format_stamp: "Formatted at 2025-08-12 01:52:44 on dist-test-slave-3nxt"
I20250812 01:52:44.663398 3905 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250812 01:52:44.669481 3922 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:44.670642 3905 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250812 01:52:44.670984 3905 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1a18242e4afe4eec90f1ed0110e9f66a"
format_stamp: "Formatted at 2025-08-12 01:52:44 on dist-test-slave-3nxt"
I20250812 01:52:44.671325 3905 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:44.726090 3905 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:44.727527 3905 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:44.727965 3905 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:44.730682 3905 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:44.734936 3905 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:44.735153 3905 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:44.735443 3905 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:44.735615 3905 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:44.901558 3905 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:36207
I20250812 01:52:44.901670 4034 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:36207 every 8 connection(s)
I20250812 01:52:44.904273 3905 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:52:44.911782 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 3905
I20250812 01:52:44.912410 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:52:44.920284 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--builtin_ntp_servers=127.2.74.84:39081
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:52:44.933020 4035 heartbeater.cc:344] Connected to a master server at 127.2.74.126:40875
I20250812 01:52:44.933552 4035 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:44.934867 4035 heartbeater.cc:507] Master 127.2.74.126:40875 requested a full tablet report, sending...
I20250812 01:52:44.938092 3846 ts_manager.cc:194] Registered new tserver with Master: 1a18242e4afe4eec90f1ed0110e9f66a (127.2.74.65:36207)
I20250812 01:52:44.941375 3846 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:34179
W20250812 01:52:45.231381 4039 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:45.231858 4039 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:45.232293 4039 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:45.264465 4039 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:45.265326 4039 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:52:45.301137 4039 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:39081
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:45.302424 4039 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:45.303973 4039 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:45.316821 4045 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:45.318677 4046 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:45.945950 4035 heartbeater.cc:499] Master 127.2.74.126:40875 was elected leader, sending a full tablet report...
W20250812 01:52:46.505331 4048 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:46.507718 4047 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1186 milliseconds
I20250812 01:52:46.507825 4039 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:52:46.509119 4039 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:46.511277 4039 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:46.512655 4039 hybrid_clock.cc:648] HybridClock initialized: now 1754963566512620 us; error 63 us; skew 500 ppm
I20250812 01:52:46.513432 4039 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:46.520033 4039 webserver.cc:489] Webserver started at http://127.2.74.66:44167/ using document root <none> and password file <none>
I20250812 01:52:46.521006 4039 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:46.521250 4039 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:46.521881 4039 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:46.526506 4039 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "ffd3eefe490748ffb5efba9e21abc645"
format_stamp: "Formatted at 2025-08-12 01:52:46 on dist-test-slave-3nxt"
I20250812 01:52:46.527586 4039 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "ffd3eefe490748ffb5efba9e21abc645"
format_stamp: "Formatted at 2025-08-12 01:52:46 on dist-test-slave-3nxt"
I20250812 01:52:46.534916 4039 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.004s
I20250812 01:52:46.540633 4055 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:46.541688 4039 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250812 01:52:46.542004 4039 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "ffd3eefe490748ffb5efba9e21abc645"
format_stamp: "Formatted at 2025-08-12 01:52:46 on dist-test-slave-3nxt"
I20250812 01:52:46.542337 4039 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:46.616060 4039 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:46.617633 4039 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:46.618072 4039 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:46.620550 4039 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:46.624686 4039 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:46.624909 4039 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:46.625161 4039 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:46.625321 4039 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:46.761344 4039 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:33797
I20250812 01:52:46.761452 4167 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:33797 every 8 connection(s)
I20250812 01:52:46.763863 4039 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:52:46.770561 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4039
I20250812 01:52:46.770967 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:52:46.777195 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--builtin_ntp_servers=127.2.74.84:39081
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:52:46.785915 4168 heartbeater.cc:344] Connected to a master server at 127.2.74.126:40875
I20250812 01:52:46.786470 4168 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:46.787819 4168 heartbeater.cc:507] Master 127.2.74.126:40875 requested a full tablet report, sending...
I20250812 01:52:46.790647 3846 ts_manager.cc:194] Registered new tserver with Master: ffd3eefe490748ffb5efba9e21abc645 (127.2.74.66:33797)
I20250812 01:52:46.791941 3846 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:57773
W20250812 01:52:47.091370 4172 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:47.091882 4172 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:52:47.092389 4172 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:47.123603 4172 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:52:47.124884 4172 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:52:47.159238 4172 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:39081
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:40875
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:52:47.160614 4172 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:52:47.162164 4172 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:52:47.173988 4178 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:47.796020 4168 heartbeater.cc:499] Master 127.2.74.126:40875 was elected leader, sending a full tablet report...
W20250812 01:52:47.174696 4179 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:52:48.578163 4177 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4172
W20250812 01:52:48.771514 4172 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.598s user 0.502s sys 1.070s
W20250812 01:52:48.772554 4180 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1597 milliseconds
W20250812 01:52:48.773181 4172 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.600s user 0.502s sys 1.071s
I20250812 01:52:48.774058 4172 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:52:48.774070 4181 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:52:48.777957 4172 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:52:48.781162 4172 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:52:48.782809 4172 hybrid_clock.cc:648] HybridClock initialized: now 1754963568782744 us; error 73 us; skew 500 ppm
I20250812 01:52:48.784189 4172 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:52:48.792065 4172 webserver.cc:489] Webserver started at http://127.2.74.67:35165/ using document root <none> and password file <none>
I20250812 01:52:48.793139 4172 fs_manager.cc:362] Metadata directory not provided
I20250812 01:52:48.793380 4172 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:52:48.793895 4172 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:52:48.798816 4172 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "f3880c11ff0c4042821c5987ae652c18"
format_stamp: "Formatted at 2025-08-12 01:52:48 on dist-test-slave-3nxt"
I20250812 01:52:48.800081 4172 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "f3880c11ff0c4042821c5987ae652c18"
format_stamp: "Formatted at 2025-08-12 01:52:48 on dist-test-slave-3nxt"
I20250812 01:52:48.808781 4172 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.006s sys 0.002s
I20250812 01:52:48.814623 4188 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:48.815730 4172 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:52:48.816066 4172 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "f3880c11ff0c4042821c5987ae652c18"
format_stamp: "Formatted at 2025-08-12 01:52:48 on dist-test-slave-3nxt"
I20250812 01:52:48.816387 4172 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:52:48.865571 4172 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:52:48.867062 4172 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:52:48.867486 4172 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:52:48.869992 4172 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:52:48.874080 4172 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:52:48.874284 4172 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:48.874568 4172 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:52:48.874725 4172 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:52:49.016530 4172 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:38259
I20250812 01:52:49.016685 4300 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:38259 every 8 connection(s)
I20250812 01:52:49.019198 4172 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:52:49.027992 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4172
I20250812 01:52:49.028496 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:52:49.042047 4301 heartbeater.cc:344] Connected to a master server at 127.2.74.126:40875
I20250812 01:52:49.042459 4301 heartbeater.cc:461] Registering TS with master...
I20250812 01:52:49.043440 4301 heartbeater.cc:507] Master 127.2.74.126:40875 requested a full tablet report, sending...
I20250812 01:52:49.045583 3846 ts_manager.cc:194] Registered new tserver with Master: f3880c11ff0c4042821c5987ae652c18 (127.2.74.67:38259)
I20250812 01:52:49.046957 3846 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:48421
I20250812 01:52:49.049682 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:52:49.085419 3846 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41092:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250812 01:52:49.104360 3846 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250812 01:52:49.165722 3970 tablet_service.cc:1468] Processing CreateTablet for tablet f1581a4c34ea4f59bc0dc90745234b95 (DEFAULT_TABLE table=TestTable [id=e54950b072094c2bade5306cf03a8dde]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:49.167068 4103 tablet_service.cc:1468] Processing CreateTablet for tablet f1581a4c34ea4f59bc0dc90745234b95 (DEFAULT_TABLE table=TestTable [id=e54950b072094c2bade5306cf03a8dde]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:49.167639 3970 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f1581a4c34ea4f59bc0dc90745234b95. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.168555 4103 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f1581a4c34ea4f59bc0dc90745234b95. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.169265 4236 tablet_service.cc:1468] Processing CreateTablet for tablet f1581a4c34ea4f59bc0dc90745234b95 (DEFAULT_TABLE table=TestTable [id=e54950b072094c2bade5306cf03a8dde]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:52:49.171274 4236 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f1581a4c34ea4f59bc0dc90745234b95. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.190841 4320 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: Bootstrap starting.
I20250812 01:52:49.202556 4320 tablet_bootstrap.cc:654] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.202801 4321 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: Bootstrap starting.
I20250812 01:52:49.205924 4320 log.cc:826] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:49.213549 4321 tablet_bootstrap.cc:654] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.214166 4322 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Bootstrap starting.
I20250812 01:52:49.216382 4321 log.cc:826] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:49.217792 4320 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: No bootstrap required, opened a new log
I20250812 01:52:49.218487 4320 ts_tablet_manager.cc:1397] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: Time spent bootstrapping tablet: real 0.028s user 0.014s sys 0.004s
I20250812 01:52:49.222155 4322 tablet_bootstrap.cc:654] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.222623 4321 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: No bootstrap required, opened a new log
I20250812 01:52:49.223217 4321 ts_tablet_manager.cc:1397] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: Time spent bootstrapping tablet: real 0.022s user 0.008s sys 0.009s
I20250812 01:52:49.224787 4322 log.cc:826] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Log is configured to *not* fsync() on all Append() calls
I20250812 01:52:49.235450 4322 tablet_bootstrap.cc:492] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: No bootstrap required, opened a new log
I20250812 01:52:49.236178 4322 ts_tablet_manager.cc:1397] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Time spent bootstrapping tablet: real 0.023s user 0.013s sys 0.004s
I20250812 01:52:49.247481 4320 raft_consensus.cc:357] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.248559 4320 raft_consensus.cc:383] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.248932 4320 raft_consensus.cc:738] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ffd3eefe490748ffb5efba9e21abc645, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.249895 4320 consensus_queue.cc:260] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.273125 4320 ts_tablet_manager.cc:1428] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645: Time spent starting tablet: real 0.054s user 0.037s sys 0.016s
I20250812 01:52:49.272656 4322 raft_consensus.cc:357] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.273666 4322 raft_consensus.cc:383] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.274026 4322 raft_consensus.cc:738] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1a18242e4afe4eec90f1ed0110e9f66a, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.278294 4322 consensus_queue.cc:260] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.280453 4321 raft_consensus.cc:357] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.281664 4321 raft_consensus.cc:383] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.282042 4321 raft_consensus.cc:738] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3880c11ff0c4042821c5987ae652c18, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.283172 4321 consensus_queue.cc:260] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.287084 4301 heartbeater.cc:499] Master 127.2.74.126:40875 was elected leader, sending a full tablet report...
I20250812 01:52:49.295529 4329 raft_consensus.cc:491] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:52:49.296249 4329 raft_consensus.cc:513] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.302944 4321 ts_tablet_manager.cc:1428] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18: Time spent starting tablet: real 0.079s user 0.023s sys 0.009s
I20250812 01:52:49.307756 4329 leader_election.cc:290] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1a18242e4afe4eec90f1ed0110e9f66a (127.2.74.65:36207), ffd3eefe490748ffb5efba9e21abc645 (127.2.74.66:33797)
I20250812 01:52:49.319751 3990 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f1581a4c34ea4f59bc0dc90745234b95" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" is_pre_election: true
I20250812 01:52:49.320695 3990 raft_consensus.cc:2466] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 0.
I20250812 01:52:49.321955 4322 ts_tablet_manager.cc:1428] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Time spent starting tablet: real 0.085s user 0.041s sys 0.028s
I20250812 01:52:49.322364 4189 leader_election.cc:304] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1a18242e4afe4eec90f1ed0110e9f66a, f3880c11ff0c4042821c5987ae652c18; no voters:
I20250812 01:52:49.323843 4329 raft_consensus.cc:2802] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:52:49.324290 4329 raft_consensus.cc:491] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:52:49.324802 4329 raft_consensus.cc:3058] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.330399 4123 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f1581a4c34ea4f59bc0dc90745234b95" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ffd3eefe490748ffb5efba9e21abc645" is_pre_election: true
I20250812 01:52:49.331283 4123 raft_consensus.cc:2466] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 0.
I20250812 01:52:49.333319 4329 raft_consensus.cc:513] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.334721 4329 leader_election.cc:290] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 election: Requested vote from peers 1a18242e4afe4eec90f1ed0110e9f66a (127.2.74.65:36207), ffd3eefe490748ffb5efba9e21abc645 (127.2.74.66:33797)
I20250812 01:52:49.335604 3990 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f1581a4c34ea4f59bc0dc90745234b95" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1a18242e4afe4eec90f1ed0110e9f66a"
I20250812 01:52:49.335683 4123 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f1581a4c34ea4f59bc0dc90745234b95" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ffd3eefe490748ffb5efba9e21abc645"
I20250812 01:52:49.336136 3990 raft_consensus.cc:3058] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.336225 4123 raft_consensus.cc:3058] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.341414 4123 raft_consensus.cc:2466] T f1581a4c34ea4f59bc0dc90745234b95 P ffd3eefe490748ffb5efba9e21abc645 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 1.
I20250812 01:52:49.342481 4192 leader_election.cc:304] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: f3880c11ff0c4042821c5987ae652c18, ffd3eefe490748ffb5efba9e21abc645; no voters:
I20250812 01:52:49.342890 3990 raft_consensus.cc:2466] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 1.
I20250812 01:52:49.343138 4329 raft_consensus.cc:2802] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:49.343887 4329 raft_consensus.cc:695] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [term 1 LEADER]: Becoming Leader. State: Replica: f3880c11ff0c4042821c5987ae652c18, State: Running, Role: LEADER
I20250812 01:52:49.345106 4329 consensus_queue.cc:237] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } }
I20250812 01:52:49.356730 3846 catalog_manager.cc:5582] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 reported cstate change: term changed from 0 to 1, leader changed from <none> to f3880c11ff0c4042821c5987ae652c18 (127.2.74.67). New cstate: current_term: 1 leader_uuid: "f3880c11ff0c4042821c5987ae652c18" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } health_report { overall_health: UNKNOWN } } }
I20250812 01:52:49.403118 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:52:49.406600 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1a18242e4afe4eec90f1ed0110e9f66a to finish bootstrapping
W20250812 01:52:49.422652 4036 tablet.cc:2378] T f1581a4c34ea4f59bc0dc90745234b95 P 1a18242e4afe4eec90f1ed0110e9f66a: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:49.422792 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ffd3eefe490748ffb5efba9e21abc645 to finish bootstrapping
I20250812 01:52:49.433483 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f3880c11ff0c4042821c5987ae652c18 to finish bootstrapping
I20250812 01:52:49.447455 3846 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41092:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
W20250812 01:52:49.449043 3846 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250812 01:52:49.466176 3970 tablet_service.cc:1468] Processing CreateTablet for tablet e88ef57321ee4ffd911bdfd07d11b41c (DEFAULT_TABLE table=TestAnotherTable [id=2d7011afb17d4b128e43f5b402cb006e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250812 01:52:49.466781 4103 tablet_service.cc:1468] Processing CreateTablet for tablet e88ef57321ee4ffd911bdfd07d11b41c (DEFAULT_TABLE table=TestAnotherTable [id=2d7011afb17d4b128e43f5b402cb006e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250812 01:52:49.467341 3970 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet e88ef57321ee4ffd911bdfd07d11b41c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.467872 4103 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet e88ef57321ee4ffd911bdfd07d11b41c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.467586 4236 tablet_service.cc:1468] Processing CreateTablet for tablet e88ef57321ee4ffd911bdfd07d11b41c (DEFAULT_TABLE table=TestAnotherTable [id=2d7011afb17d4b128e43f5b402cb006e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250812 01:52:49.468715 4236 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet e88ef57321ee4ffd911bdfd07d11b41c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:52:49.477320 4322 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a: Bootstrap starting.
I20250812 01:52:49.482795 4320 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: Bootstrap starting.
I20250812 01:52:49.483904 4321 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: Bootstrap starting.
I20250812 01:52:49.484341 4322 tablet_bootstrap.cc:654] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.489243 4320 tablet_bootstrap.cc:654] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.490207 4321 tablet_bootstrap.cc:654] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: Neither blocks nor log segments found. Creating new log.
I20250812 01:52:49.496661 4320 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: No bootstrap required, opened a new log
I20250812 01:52:49.496963 4322 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a: No bootstrap required, opened a new log
I20250812 01:52:49.497133 4320 ts_tablet_manager.cc:1397] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: Time spent bootstrapping tablet: real 0.015s user 0.010s sys 0.003s
I20250812 01:52:49.497330 4322 ts_tablet_manager.cc:1397] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a: Time spent bootstrapping tablet: real 0.020s user 0.006s sys 0.010s
I20250812 01:52:49.497447 4321 tablet_bootstrap.cc:492] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: No bootstrap required, opened a new log
I20250812 01:52:49.497921 4321 ts_tablet_manager.cc:1397] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: Time spent bootstrapping tablet: real 0.014s user 0.004s sys 0.008s
I20250812 01:52:49.499378 4322 raft_consensus.cc:357] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.499907 4322 raft_consensus.cc:383] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.500138 4322 raft_consensus.cc:738] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1a18242e4afe4eec90f1ed0110e9f66a, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.499866 4320 raft_consensus.cc:357] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.500663 4320 raft_consensus.cc:383] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.500981 4320 raft_consensus.cc:738] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ffd3eefe490748ffb5efba9e21abc645, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.500764 4322 consensus_queue.cc:260] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.500751 4321 raft_consensus.cc:357] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.501519 4321 raft_consensus.cc:383] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:52:49.501844 4321 raft_consensus.cc:738] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3880c11ff0c4042821c5987ae652c18, State: Initialized, Role: FOLLOWER
I20250812 01:52:49.501940 4320 consensus_queue.cc:260] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.502889 4322 ts_tablet_manager.cc:1428] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a: Time spent starting tablet: real 0.005s user 0.000s sys 0.004s
I20250812 01:52:49.502573 4321 consensus_queue.cc:260] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.504824 4321 ts_tablet_manager.cc:1428] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: Time spent starting tablet: real 0.007s user 0.005s sys 0.003s
I20250812 01:52:49.507380 4320 ts_tablet_manager.cc:1428] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: Time spent starting tablet: real 0.010s user 0.007s sys 0.000s
W20250812 01:52:49.521198 4169 tablet.cc:2378] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250812 01:52:49.524315 4302 tablet.cc:2378] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:52:49.812454 4329 raft_consensus.cc:491] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:52:49.812891 4329 raft_consensus.cc:513] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.814266 4329 leader_election.cc:290] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1a18242e4afe4eec90f1ed0110e9f66a (127.2.74.65:36207), ffd3eefe490748ffb5efba9e21abc645 (127.2.74.66:33797)
I20250812 01:52:49.815158 3990 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "e88ef57321ee4ffd911bdfd07d11b41c" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" is_pre_election: true
I20250812 01:52:49.815335 4123 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "e88ef57321ee4ffd911bdfd07d11b41c" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ffd3eefe490748ffb5efba9e21abc645" is_pre_election: true
I20250812 01:52:49.815819 3990 raft_consensus.cc:2466] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 0.
I20250812 01:52:49.815939 4123 raft_consensus.cc:2466] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 0.
I20250812 01:52:49.816886 4189 leader_election.cc:304] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1a18242e4afe4eec90f1ed0110e9f66a, f3880c11ff0c4042821c5987ae652c18; no voters:
I20250812 01:52:49.817498 4329 raft_consensus.cc:2802] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:52:49.817782 4329 raft_consensus.cc:491] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:52:49.818027 4329 raft_consensus.cc:3058] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.822299 4329 raft_consensus.cc:513] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.823683 4329 leader_election.cc:290] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 election: Requested vote from peers 1a18242e4afe4eec90f1ed0110e9f66a (127.2.74.65:36207), ffd3eefe490748ffb5efba9e21abc645 (127.2.74.66:33797)
I20250812 01:52:49.824574 3990 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "e88ef57321ee4ffd911bdfd07d11b41c" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1a18242e4afe4eec90f1ed0110e9f66a"
I20250812 01:52:49.824736 4123 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "e88ef57321ee4ffd911bdfd07d11b41c" candidate_uuid: "f3880c11ff0c4042821c5987ae652c18" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ffd3eefe490748ffb5efba9e21abc645"
I20250812 01:52:49.825155 3990 raft_consensus.cc:3058] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.825260 4123 raft_consensus.cc:3058] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:52:49.829497 4123 raft_consensus.cc:2466] T e88ef57321ee4ffd911bdfd07d11b41c P ffd3eefe490748ffb5efba9e21abc645 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 1.
I20250812 01:52:49.829991 3990 raft_consensus.cc:2466] T e88ef57321ee4ffd911bdfd07d11b41c P 1a18242e4afe4eec90f1ed0110e9f66a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3880c11ff0c4042821c5987ae652c18 in term 1.
I20250812 01:52:49.830492 4192 leader_election.cc:304] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: f3880c11ff0c4042821c5987ae652c18, ffd3eefe490748ffb5efba9e21abc645; no voters:
I20250812 01:52:49.831214 4329 raft_consensus.cc:2802] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:52:49.831616 4329 raft_consensus.cc:695] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [term 1 LEADER]: Becoming Leader. State: Replica: f3880c11ff0c4042821c5987ae652c18, State: Running, Role: LEADER
I20250812 01:52:49.832316 4329 consensus_queue.cc:237] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } }
I20250812 01:52:49.838800 3843 catalog_manager.cc:5582] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 reported cstate change: term changed from 0 to 1, leader changed from <none> to f3880c11ff0c4042821c5987ae652c18 (127.2.74.67). New cstate: current_term: 1 leader_uuid: "f3880c11ff0c4042821c5987ae652c18" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f3880c11ff0c4042821c5987ae652c18" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 38259 } health_report { overall_health: HEALTHY } } }
I20250812 01:52:49.840310 4329 consensus_queue.cc:1035] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:52:49.845717 4329 consensus_queue.cc:1035] T f1581a4c34ea4f59bc0dc90745234b95 P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250812 01:52:50.207598 4342 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:50.208247 4342 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:50.240706 4342 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250812 01:52:50.267295 4341 consensus_queue.cc:1035] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1a18242e4afe4eec90f1ed0110e9f66a" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 36207 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250812 01:52:50.284852 4328 consensus_queue.cc:1035] T e88ef57321ee4ffd911bdfd07d11b41c P f3880c11ff0c4042821c5987ae652c18 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ffd3eefe490748ffb5efba9e21abc645" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 33797 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250812 01:52:51.609421 4342 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.315s user 0.000s sys 0.004s
W20250812 01:52:51.609710 4342 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.316s user 0.000s sys 0.004s
W20250812 01:52:53.006160 4359 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:53.006721 4359 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:53.038954 4359 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250812 01:52:54.342669 4359 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.265s user 0.491s sys 0.771s
W20250812 01:52:54.343102 4359 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.265s user 0.491s sys 0.771s
W20250812 01:52:55.720773 4376 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:55.721393 4376 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:55.756989 4376 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250812 01:52:57.034013 4376 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.234s user 0.494s sys 0.738s
W20250812 01:52:57.034401 4376 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.234s user 0.494s sys 0.738s
W20250812 01:52:58.434688 4390 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:52:58.435258 4390 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:52:58.467984 4390 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250812 01:52:59.739846 4390 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.233s user 0.446s sys 0.784s
W20250812 01:52:59.740140 4390 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.234s user 0.446s sys 0.784s
I20250812 01:53:00.820386 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3905
I20250812 01:53:00.848619 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4039
I20250812 01:53:00.875300 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4172
I20250812 01:53:00.902595 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 3813
2025-08-12T01:53:00Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (20502 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250812 01:53:00.960847 2345 test_util.cc:276] Using random seed: 1289109557
I20250812 01:53:00.964941 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:53:00.965108 2345 ts_itest-base.cc:116] --------------
I20250812 01:53:00.965274 2345 ts_itest-base.cc:117] 3 tablet servers
I20250812 01:53:00.965417 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:53:00.965552 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:53:00Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:53:00Z Disabled control of system clock
I20250812 01:53:01.002177 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:42727
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:36719
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:42727
--superuser_acl=no-such-user with env {}
W20250812 01:53:01.309605 4411 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:01.310262 4411 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:01.310783 4411 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:01.343258 4411 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:53:01.343613 4411 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:01.343875 4411 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:53:01.344115 4411 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:53:01.380079 4411 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:36719
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:42727
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:42727
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:01.381467 4411 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:01.383078 4411 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:01.393873 4417 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:01.394775 4418 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:01.397791 4420 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:02.569267 4419 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250812 01:53:02.569356 4411 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
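The four instance_detector warnings plus the "Not found" summary show the server probing the AWS, Azure, OpenStack, and GCE metadata endpoints and concluding it is not on a cloud host. A small, hypothetical triage sketch (names are mine) that tallies these probe failures per provider from a log:

import re
from collections import Counter

# Hypothetical log triage: count failed cloud-metadata probes per provider.
DETECTOR_RE = re.compile(
    r"instance_detector\.cc:\d+\] could not retrieve (\w+) instance metadata")

def failed_probes(log_lines):
    counts = Counter()
    for line in log_lines:
        m = DETECTOR_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# For one server start in this log, the result would be
#   Counter({'AWS': 1, 'Azure': 1, 'OpenStack': 1, 'GCE': 1})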
I20250812 01:53:02.573040 4411 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:02.576115 4411 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:02.577540 4411 hybrid_clock.cc:648] HybridClock initialized: now 1754963582577498 us; error 60 us; skew 500 ppm
I20250812 01:53:02.578339 4411 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:02.584965 4411 webserver.cc:489] Webserver started at http://127.2.74.126:39377/ using document root <none> and password file <none>
I20250812 01:53:02.585904 4411 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:02.586114 4411 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:02.586585 4411 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:02.591022 4411 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "6f41474c553d4815a6ab2906ff682c44"
format_stamp: "Formatted at 2025-08-12 01:53:02 on dist-test-slave-3nxt"
I20250812 01:53:02.592135 4411 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "6f41474c553d4815a6ab2906ff682c44"
format_stamp: "Formatted at 2025-08-12 01:53:02 on dist-test-slave-3nxt"
I20250812 01:53:02.599647 4411 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.000s
I20250812 01:53:02.605895 4427 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:02.607069 4411 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250812 01:53:02.607436 4411 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "6f41474c553d4815a6ab2906ff682c44"
format_stamp: "Formatted at 2025-08-12 01:53:02 on dist-test-slave-3nxt"
I20250812 01:53:02.607795 4411 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
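Every freshly formatted server prints an FS layout report like the one above, with all counters at zero on a new deployment. A small sketch, assuming only the report layout shown here, that parses the "Total ..." counters of one such block into a dict:

import re

# Sketch: pull the numeric counters out of one "FS layout report" block.
COUNTER_RE = re.compile(r"^(?P<name>Total [^:]+): (?P<value>\d+)")

def fs_report_counters(report_lines):
    """Map counter name -> integer value for lines like 'Total live blocks: 0'."""
    counters = {}
    for line in report_lines:
        m = COUNTER_RE.match(line.strip())
        if m:
            counters[m.group(name := "name")] = int(m.group("value"))
    return counters

# For the report above this yields e.g.
#   {'Total live blocks': 0, 'Total live bytes': 0,
#    'Total number of LBM containers': 0, ...}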
I20250812 01:53:02.669214 4411 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:02.670722 4411 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:02.671176 4411 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:02.743042 4411 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:42727
I20250812 01:53:02.743122 4478 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:42727 every 8 connection(s)
I20250812 01:53:02.745783 4411 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:53:02.746311 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4411
I20250812 01:53:02.746692 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:53:02.752569 4479 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:02.778008 4479 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44: Bootstrap starting.
I20250812 01:53:02.783839 4479 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:02.785738 4479 log.cc:826] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:02.790284 4479 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44: No bootstrap required, opened a new log
I20250812 01:53:02.807755 4479 raft_consensus.cc:357] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } }
I20250812 01:53:02.808615 4479 raft_consensus.cc:383] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:02.808895 4479 raft_consensus.cc:738] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6f41474c553d4815a6ab2906ff682c44, State: Initialized, Role: FOLLOWER
I20250812 01:53:02.809646 4479 consensus_queue.cc:260] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } }
I20250812 01:53:02.810177 4479 raft_consensus.cc:397] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:02.810433 4479 raft_consensus.cc:491] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:02.810745 4479 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:02.814898 4479 raft_consensus.cc:513] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } }
I20250812 01:53:02.815663 4479 leader_election.cc:304] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6f41474c553d4815a6ab2906ff682c44; no voters:
I20250812 01:53:02.817361 4479 leader_election.cc:290] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:53:02.818123 4484 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:02.820343 4484 raft_consensus.cc:695] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [term 1 LEADER]: Becoming Leader. State: Replica: 6f41474c553d4815a6ab2906ff682c44, State: Running, Role: LEADER
I20250812 01:53:02.821597 4479 sys_catalog.cc:564] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:53:02.821295 4484 consensus_queue.cc:237] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } }
I20250812 01:53:02.833197 4486 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 6f41474c553d4815a6ab2906ff682c44. Latest consensus state: current_term: 1 leader_uuid: "6f41474c553d4815a6ab2906ff682c44" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } } }
I20250812 01:53:02.834137 4486 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:02.838119 4485 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "6f41474c553d4815a6ab2906ff682c44" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f41474c553d4815a6ab2906ff682c44" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 42727 } } }
I20250812 01:53:02.839017 4485 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44 [sys.catalog]: This master's current role is: LEADER
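Because the sys catalog config has a single VOTER, the election above is decided as soon as the candidate votes for itself ("received 1 responses out of 1 voters: 1 yes votes"). An illustrative sketch of the majority arithmetic behind that outcome; this is not Kudu's implementation, just the standard Raft rule:

# Illustrative only -- not Kudu's code. A candidate needs a strict majority
# of the voters in the active config to win an election.
def majority_size(num_voters: int) -> int:
    return num_voters // 2 + 1

def election_decided(yes_votes: int, num_voters: int) -> bool:
    return yes_votes >= majority_size(num_voters)

# Single-voter sys catalog config, as in the log above:
assert majority_size(1) == 1
assert election_decided(yes_votes=1, num_voters=1)  # the candidate's own vote wins
# Three-replica user tablet, as created later in this test:
assert majority_size(3) == 2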
I20250812 01:53:02.841862 4493 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:53:02.855579 4493 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:53:02.871393 4493 catalog_manager.cc:1349] Generated new cluster ID: 947ec03329784244999a9060d8a10753
I20250812 01:53:02.871752 4493 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:53:02.886783 4493 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:53:02.888240 4493 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:53:02.900249 4493 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 6f41474c553d4815a6ab2906ff682c44: Generated new TSK 0
I20250812 01:53:02.901194 4493 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:53:02.921826 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--builtin_ntp_servers=127.2.74.84:36719
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250812 01:53:03.238160 4503 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:03.238701 4503 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:03.239216 4503 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:03.272742 4503 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:03.273599 4503 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:53:03.309901 4503 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:36719
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:03.311235 4503 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:03.312884 4503 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:03.326797 4509 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:04.728112 4508 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4503
W20250812 01:53:05.106230 4508 kernel_stack_watchdog.cc:198] Thread 4503 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 397ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:53:03.326952 4510 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:05.112783 4503 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.785s user 0.002s sys 0.006s
W20250812 01:53:05.113333 4503 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.786s user 0.002s sys 0.006s
W20250812 01:53:05.113636 4511 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1787 milliseconds
W20250812 01:53:05.114604 4513 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:05.114681 4503 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:05.115919 4503 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:05.118435 4503 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:05.119889 4503 hybrid_clock.cc:648] HybridClock initialized: now 1754963585119820 us; error 76 us; skew 500 ppm
I20250812 01:53:05.120716 4503 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:05.127713 4503 webserver.cc:489] Webserver started at http://127.2.74.65:40149/ using document root <none> and password file <none>
I20250812 01:53:05.128693 4503 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:05.128916 4503 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:05.129381 4503 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:05.133769 4503 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb"
format_stamp: "Formatted at 2025-08-12 01:53:05 on dist-test-slave-3nxt"
I20250812 01:53:05.134902 4503 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb"
format_stamp: "Formatted at 2025-08-12 01:53:05 on dist-test-slave-3nxt"
I20250812 01:53:05.142568 4503 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250812 01:53:05.149104 4520 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:05.150362 4503 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.003s sys 0.001s
I20250812 01:53:05.150717 4503 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb"
format_stamp: "Formatted at 2025-08-12 01:53:05 on dist-test-slave-3nxt"
I20250812 01:53:05.151050 4503 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:05.217453 4503 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:05.219408 4503 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:05.219960 4503 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:05.223285 4503 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:05.229054 4503 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:05.229387 4503 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:05.229704 4503 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:05.229914 4503 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:05.393561 4503 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:34869
I20250812 01:53:05.393662 4632 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:34869 every 8 connection(s)
I20250812 01:53:05.396214 4503 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:53:05.401901 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4503
I20250812 01:53:05.402755 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:53:05.414296 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--builtin_ntp_servers=127.2.74.84:36719
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:05.434264 4633 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42727
I20250812 01:53:05.434878 4633 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:05.436424 4633 heartbeater.cc:507] Master 127.2.74.126:42727 requested a full tablet report, sending...
I20250812 01:53:05.439908 4444 ts_manager.cc:194] Registered new tserver with Master: f1e40fe7728b4c34bc77e5a3789cb3bb (127.2.74.65:34869)
I20250812 01:53:05.442703 4444 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:54323
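The sequence above is the tablet server registration handshake: connect to the master, register, send a full tablet report, and receive a signed X509 certificate. A minimal sketch, grounded in the "Registered new tserver" wording visible in the master's log, that collects the resulting uuid-to-address mapping:

import re

# Sketch: build {tserver_uuid: "host:port"} from master log lines like
#   "ts_manager.cc:194] Registered new tserver with Master:
#    f1e40fe7728b4c34bc77e5a3789cb3bb (127.2.74.65:34869)"
REGISTERED_RE = re.compile(
    r"Registered new tserver with Master: (?P<uuid>[0-9a-f]+) "
    r"\((?P<addr>[\d.]+:\d+)\)")

def registered_tservers(log_lines):
    return {m.group("uuid"): m.group("addr")
            for m in map(REGISTERED_RE.search, log_lines) if m}

# For this test the map ends up with three entries, one per tablet server.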
W20250812 01:53:05.730070 4637 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:05.730623 4637 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:05.731209 4637 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:05.763605 4637 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:05.764484 4637 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:53:05.802465 4637 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:36719
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:05.803822 4637 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:05.805487 4637 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:05.818464 4643 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:06.447016 4633 heartbeater.cc:499] Master 127.2.74.126:42727 was elected leader, sending a full tablet report...
W20250812 01:53:07.221549 4642 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4637
W20250812 01:53:07.583230 4637 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.765s user 0.590s sys 1.175s
W20250812 01:53:07.583696 4637 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.766s user 0.590s sys 1.175s
W20250812 01:53:05.819011 4644 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:07.585628 4646 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:07.589061 4645 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1766 milliseconds
I20250812 01:53:07.589130 4637 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:07.590270 4637 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:07.592305 4637 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:07.593652 4637 hybrid_clock.cc:648] HybridClock initialized: now 1754963587593602 us; error 59 us; skew 500 ppm
I20250812 01:53:07.594415 4637 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:07.600575 4637 webserver.cc:489] Webserver started at http://127.2.74.66:42893/ using document root <none> and password file <none>
I20250812 01:53:07.601611 4637 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:07.601847 4637 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:07.602293 4637 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:07.606767 4637 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "d3da723fa0db41a4b67949bbf33cec0a"
format_stamp: "Formatted at 2025-08-12 01:53:07 on dist-test-slave-3nxt"
I20250812 01:53:07.607923 4637 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "d3da723fa0db41a4b67949bbf33cec0a"
format_stamp: "Formatted at 2025-08-12 01:53:07 on dist-test-slave-3nxt"
I20250812 01:53:07.615151 4637 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.001s
I20250812 01:53:07.620883 4653 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:07.621970 4637 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250812 01:53:07.622314 4637 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d3da723fa0db41a4b67949bbf33cec0a"
format_stamp: "Formatted at 2025-08-12 01:53:07 on dist-test-slave-3nxt"
I20250812 01:53:07.622660 4637 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:07.670037 4637 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:07.671500 4637 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:07.671952 4637 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:07.674438 4637 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:07.678520 4637 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:07.678735 4637 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:07.678998 4637 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:07.679167 4637 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:07.818733 4637 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:38967
I20250812 01:53:07.818858 4765 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:38967 every 8 connection(s)
I20250812 01:53:07.821350 4637 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:53:07.830395 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4637
I20250812 01:53:07.830931 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:53:07.837610 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--builtin_ntp_servers=127.2.74.84:36719
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:07.848084 4766 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42727
I20250812 01:53:07.848531 4766 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:07.849587 4766 heartbeater.cc:507] Master 127.2.74.126:42727 requested a full tablet report, sending...
I20250812 01:53:07.851897 4444 ts_manager.cc:194] Registered new tserver with Master: d3da723fa0db41a4b67949bbf33cec0a (127.2.74.66:38967)
I20250812 01:53:07.853768 4444 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:53789
W20250812 01:53:08.139400 4770 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:08.139873 4770 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:08.140336 4770 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:08.172472 4770 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:08.173327 4770 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:53:08.208890 4770 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:36719
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:42727
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:08.210191 4770 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:08.211723 4770 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:08.223985 4776 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:08.857259 4766 heartbeater.cc:499] Master 127.2.74.126:42727 was elected leader, sending a full tablet report...
W20250812 01:53:08.224551 4777 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:09.586653 4778 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1357 milliseconds
W20250812 01:53:09.586926 4770 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.363s user 0.442s sys 0.906s
W20250812 01:53:09.587297 4770 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.363s user 0.442s sys 0.906s
W20250812 01:53:09.587934 4779 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:09.587994 4770 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:09.589304 4770 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:09.591637 4770 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:09.593009 4770 hybrid_clock.cc:648] HybridClock initialized: now 1754963589592961 us; error 64 us; skew 500 ppm
I20250812 01:53:09.593817 4770 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:09.601636 4770 webserver.cc:489] Webserver started at http://127.2.74.67:37109/ using document root <none> and password file <none>
I20250812 01:53:09.602768 4770 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:09.603024 4770 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:09.603518 4770 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:09.608291 4770 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "8aec00d61fab479d92e3abf9e96e2878"
format_stamp: "Formatted at 2025-08-12 01:53:09 on dist-test-slave-3nxt"
I20250812 01:53:09.609525 4770 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "8aec00d61fab479d92e3abf9e96e2878"
format_stamp: "Formatted at 2025-08-12 01:53:09 on dist-test-slave-3nxt"
I20250812 01:53:09.618189 4770 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.007s sys 0.001s
I20250812 01:53:09.624501 4786 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:09.625788 4770 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250812 01:53:09.626145 4770 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "8aec00d61fab479d92e3abf9e96e2878"
format_stamp: "Formatted at 2025-08-12 01:53:09 on dist-test-slave-3nxt"
I20250812 01:53:09.626498 4770 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:09.701761 4770 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:09.703258 4770 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:09.703701 4770 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:09.706282 4770 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:09.710598 4770 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:09.710836 4770 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:09.711108 4770 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:09.711292 4770 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:09.856003 4770 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:46441
I20250812 01:53:09.856096 4898 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:46441 every 8 connection(s)
I20250812 01:53:09.858778 4770 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:53:09.868125 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4770
I20250812 01:53:09.868670 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:53:09.883358 4899 heartbeater.cc:344] Connected to a master server at 127.2.74.126:42727
I20250812 01:53:09.883803 4899 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:09.884896 4899 heartbeater.cc:507] Master 127.2.74.126:42727 requested a full tablet report, sending...
I20250812 01:53:09.887249 4444 ts_manager.cc:194] Registered new tserver with Master: 8aec00d61fab479d92e3abf9e96e2878 (127.2.74.67:46441)
I20250812 01:53:09.888716 4444 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:40571
I20250812 01:53:09.890110 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:09.924919 4443 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41418:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250812 01:53:09.944108 4443 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
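The warning above encodes a simple capacity rule: to re-replicate a tablet of replication factor R after losing one server, the cluster needs at least R + 1 live tablet servers; here R = 3 and only 3 are registered, so the warning fires. An illustrative check mirroring that rule (not the catalog manager's code):

# Illustrative only: re-replication after a single tserver failure needs
# num_replicas + 1 live tablet servers, per the warning above.
def can_rereplicate_after_failure(num_replicas: int, live_tservers: int) -> bool:
    return live_tservers >= num_replicas + 1

assert not can_rereplicate_after_failure(num_replicas=3, live_tservers=3)
assert can_rereplicate_after_failure(num_replicas=3, live_tservers=4)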
I20250812 01:53:09.994352 4701 tablet_service.cc:1468] Processing CreateTablet for tablet af8da46f384f482ab9d122c9dc9395cb (DEFAULT_TABLE table=TestTable [id=fa3f0e36d8f34bda85f8b32c68a87c04]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:09.994765 4568 tablet_service.cc:1468] Processing CreateTablet for tablet af8da46f384f482ab9d122c9dc9395cb (DEFAULT_TABLE table=TestTable [id=fa3f0e36d8f34bda85f8b32c68a87c04]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:09.994673 4834 tablet_service.cc:1468] Processing CreateTablet for tablet af8da46f384f482ab9d122c9dc9395cb (DEFAULT_TABLE table=TestTable [id=fa3f0e36d8f34bda85f8b32c68a87c04]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:09.996102 4701 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet af8da46f384f482ab9d122c9dc9395cb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:09.996569 4834 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet af8da46f384f482ab9d122c9dc9395cb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:09.996567 4568 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet af8da46f384f482ab9d122c9dc9395cb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:10.018234 4918 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Bootstrap starting.
I20250812 01:53:10.025542 4919 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Bootstrap starting.
I20250812 01:53:10.026305 4918 tablet_bootstrap.cc:654] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:10.028915 4918 log.cc:826] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:10.030050 4920 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Bootstrap starting.
I20250812 01:53:10.034222 4919 tablet_bootstrap.cc:654] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:10.035596 4918 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: No bootstrap required, opened a new log
I20250812 01:53:10.036120 4918 ts_tablet_manager.cc:1397] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Time spent bootstrapping tablet: real 0.018s user 0.005s sys 0.009s
I20250812 01:53:10.036630 4919 log.cc:826] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:10.036906 4920 tablet_bootstrap.cc:654] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:10.038972 4920 log.cc:826] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:10.042523 4919 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: No bootstrap required, opened a new log
I20250812 01:53:10.043080 4919 ts_tablet_manager.cc:1397] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Time spent bootstrapping tablet: real 0.018s user 0.014s sys 0.000s
I20250812 01:53:10.044294 4920 tablet_bootstrap.cc:492] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: No bootstrap required, opened a new log
I20250812 01:53:10.044757 4920 ts_tablet_manager.cc:1397] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Time spent bootstrapping tablet: real 0.015s user 0.007s sys 0.008s
I20250812 01:53:10.063526 4920 raft_consensus.cc:357] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.064446 4920 raft_consensus.cc:383] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:10.063979 4918 raft_consensus.cc:357] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.064796 4920 raft_consensus.cc:738] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f1e40fe7728b4c34bc77e5a3789cb3bb, State: Initialized, Role: FOLLOWER
I20250812 01:53:10.064944 4918 raft_consensus.cc:383] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:10.065285 4918 raft_consensus.cc:738] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d3da723fa0db41a4b67949bbf33cec0a, State: Initialized, Role: FOLLOWER
I20250812 01:53:10.065889 4920 consensus_queue.cc:260] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.066197 4918 consensus_queue.cc:260] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.070752 4920 ts_tablet_manager.cc:1428] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Time spent starting tablet: real 0.026s user 0.021s sys 0.003s
I20250812 01:53:10.071342 4919 raft_consensus.cc:357] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.072429 4919 raft_consensus.cc:383] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:10.072688 4919 raft_consensus.cc:738] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8aec00d61fab479d92e3abf9e96e2878, State: Initialized, Role: FOLLOWER
I20250812 01:53:10.073381 4919 consensus_queue.cc:260] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.076639 4899 heartbeater.cc:499] Master 127.2.74.126:42727 was elected leader, sending a full tablet report...
I20250812 01:53:10.077914 4919 ts_tablet_manager.cc:1428] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Time spent starting tablet: real 0.035s user 0.028s sys 0.008s
I20250812 01:53:10.077919 4918 ts_tablet_manager.cc:1428] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Time spent starting tablet: real 0.041s user 0.029s sys 0.009s
W20250812 01:53:10.078886 4767 tablet.cc:2378] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:53:10.093094 4925 raft_consensus.cc:491] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:53:10.093549 4925 raft_consensus.cc:513] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.095800 4925 leader_election.cc:290] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 8aec00d61fab479d92e3abf9e96e2878 (127.2.74.67:46441), f1e40fe7728b4c34bc77e5a3789cb3bb (127.2.74.65:34869)
I20250812 01:53:10.107792 4854 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "af8da46f384f482ab9d122c9dc9395cb" candidate_uuid: "d3da723fa0db41a4b67949bbf33cec0a" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8aec00d61fab479d92e3abf9e96e2878" is_pre_election: true
I20250812 01:53:10.108554 4854 raft_consensus.cc:2466] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d3da723fa0db41a4b67949bbf33cec0a in term 0.
I20250812 01:53:10.109122 4588 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "af8da46f384f482ab9d122c9dc9395cb" candidate_uuid: "d3da723fa0db41a4b67949bbf33cec0a" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" is_pre_election: true
I20250812 01:53:10.109948 4588 raft_consensus.cc:2466] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d3da723fa0db41a4b67949bbf33cec0a in term 0.
I20250812 01:53:10.109947 4654 leader_election.cc:304] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8aec00d61fab479d92e3abf9e96e2878, d3da723fa0db41a4b67949bbf33cec0a; no voters:
I20250812 01:53:10.110862 4925 raft_consensus.cc:2802] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:53:10.111213 4925 raft_consensus.cc:491] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:53:10.111482 4925 raft_consensus.cc:3058] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 0 FOLLOWER]: Advancing to term 1
W20250812 01:53:10.113651 4900 tablet.cc:2378] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:53:10.116472 4925 raft_consensus.cc:513] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.117830 4925 leader_election.cc:290] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [CANDIDATE]: Term 1 election: Requested vote from peers 8aec00d61fab479d92e3abf9e96e2878 (127.2.74.67:46441), f1e40fe7728b4c34bc77e5a3789cb3bb (127.2.74.65:34869)
I20250812 01:53:10.118680 4854 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "af8da46f384f482ab9d122c9dc9395cb" candidate_uuid: "d3da723fa0db41a4b67949bbf33cec0a" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8aec00d61fab479d92e3abf9e96e2878"
I20250812 01:53:10.118858 4588 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "af8da46f384f482ab9d122c9dc9395cb" candidate_uuid: "d3da723fa0db41a4b67949bbf33cec0a" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb"
I20250812 01:53:10.119123 4854 raft_consensus.cc:3058] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:10.119321 4588 raft_consensus.cc:3058] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:10.123934 4854 raft_consensus.cc:2466] T af8da46f384f482ab9d122c9dc9395cb P 8aec00d61fab479d92e3abf9e96e2878 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d3da723fa0db41a4b67949bbf33cec0a in term 1.
I20250812 01:53:10.124363 4588 raft_consensus.cc:2466] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d3da723fa0db41a4b67949bbf33cec0a in term 1.
I20250812 01:53:10.124974 4654 leader_election.cc:304] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8aec00d61fab479d92e3abf9e96e2878, d3da723fa0db41a4b67949bbf33cec0a; no voters:
I20250812 01:53:10.125672 4925 raft_consensus.cc:2802] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:10.127130 4925 raft_consensus.cc:695] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [term 1 LEADER]: Becoming Leader. State: Replica: d3da723fa0db41a4b67949bbf33cec0a, State: Running, Role: LEADER
I20250812 01:53:10.128007 4925 consensus_queue.cc:237] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } }
I20250812 01:53:10.138857 4443 catalog_manager.cc:5582] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a reported cstate change: term changed from 0 to 1, leader changed from <none> to d3da723fa0db41a4b67949bbf33cec0a (127.2.74.66). New cstate: current_term: 1 leader_uuid: "d3da723fa0db41a4b67949bbf33cec0a" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d3da723fa0db41a4b67949bbf33cec0a" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 38967 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 } health_report { overall_health: UNKNOWN } } }
W20250812 01:53:10.162873 4634 tablet.cc:2378] T af8da46f384f482ab9d122c9dc9395cb P f1e40fe7728b4c34bc77e5a3789cb3bb: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:53:10.190667 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:10.194065 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f1e40fe7728b4c34bc77e5a3789cb3bb to finish bootstrapping
I20250812 01:53:10.206822 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d3da723fa0db41a4b67949bbf33cec0a to finish bootstrapping
I20250812 01:53:10.218055 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8aec00d61fab479d92e3abf9e96e2878 to finish bootstrapping
I20250812 01:53:10.705899 4942 consensus_queue.cc:1035] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aec00d61fab479d92e3abf9e96e2878" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 46441 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250812 01:53:10.740809 4947 consensus_queue.cc:1035] T af8da46f384f482ab9d122c9dc9395cb P d3da723fa0db41a4b67949bbf33cec0a [LEADER]: Connected to new peer: Peer: permanent_uuid: "f1e40fe7728b4c34bc77e5a3789cb3bb" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 34869 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250812 01:53:11.957517 4443 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:41446
I20250812 01:53:12.993088 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4503
I20250812 01:53:13.018105 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4637
I20250812 01:53:13.043704 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4770
I20250812 01:53:13.070842 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4411
2025-08-12T01:53:13Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (12163 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250812 01:53:13.124424 2345 test_util.cc:276] Using random seed: 1301273131
I20250812 01:53:13.128815 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:53:13.129017 2345 ts_itest-base.cc:116] --------------
I20250812 01:53:13.129197 2345 ts_itest-base.cc:117] 3 tablet servers
I20250812 01:53:13.129344 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:53:13.129485 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:53:13Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:53:13Z Disabled control of system clock
I20250812 01:53:13.185178 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:33421 with env {}
W20250812 01:53:13.491864 4970 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:13.492483 4970 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:13.492973 4970 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:13.523932 4970 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:53:13.524257 4970 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:13.524533 4970 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:53:13.524789 4970 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:53:13.561321 4970 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:33421
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:13.562644 4970 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:13.564260 4970 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:13.574748 4977 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:13.575650 4978 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:14.909327 4980 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:14.912629 4979 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1335 milliseconds
W20250812 01:53:14.913816 4970 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.339s user 0.451s sys 0.877s
W20250812 01:53:14.914098 4970 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.339s user 0.451s sys 0.877s
I20250812 01:53:14.914315 4970 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:14.915390 4970 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:14.918068 4970 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:14.919432 4970 hybrid_clock.cc:648] HybridClock initialized: now 1754963594919387 us; error 62 us; skew 500 ppm
I20250812 01:53:14.920228 4970 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:14.928154 4970 webserver.cc:489] Webserver started at http://127.2.74.126:43517/ using document root <none> and password file <none>
I20250812 01:53:14.929195 4970 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:14.929425 4970 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:14.929862 4970 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:14.934348 4970 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:14.935444 4970 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:14.943430 4970 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.005s
I20250812 01:53:14.949900 4987 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:14.951124 4970 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:53:14.951457 4970 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:14.951788 4970 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:15.022971 4970 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:15.024480 4970 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:15.024979 4970 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:15.097167 4970 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:33421
I20250812 01:53:15.097222 5038 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:33421 every 8 connection(s)
I20250812 01:53:15.099973 4970 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:53:15.105401 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 4970
I20250812 01:53:15.105635 5039 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:15.106091 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:53:15.132174 5039 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap starting.
I20250812 01:53:15.138155 5039 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:15.140641 5039 log.cc:826] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:15.145532 5039 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: No bootstrap required, opened a new log
I20250812 01:53:15.163683 5039 raft_consensus.cc:357] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:15.164837 5039 raft_consensus.cc:383] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:15.165233 5039 raft_consensus.cc:738] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Initialized, Role: FOLLOWER
I20250812 01:53:15.166020 5039 consensus_queue.cc:260] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:15.166518 5039 raft_consensus.cc:397] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:15.166826 5039 raft_consensus.cc:491] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:15.167135 5039 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:15.171428 5039 raft_consensus.cc:513] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:15.172194 5039 leader_election.cc:304] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 427d60813b5b4e92afb85a2eb2ec2521; no voters:
I20250812 01:53:15.173977 5039 leader_election.cc:290] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:53:15.174865 5044 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:15.177436 5044 raft_consensus.cc:695] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 1 LEADER]: Becoming Leader. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Running, Role: LEADER
I20250812 01:53:15.178282 5039 sys_catalog.cc:564] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:53:15.178254 5044 consensus_queue.cc:237] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:15.186484 5045 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:15.187247 5045 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:15.188714 5046 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 427d60813b5b4e92afb85a2eb2ec2521. Latest consensus state: current_term: 1 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:15.189622 5046 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:15.195953 5053 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:53:15.209242 5053 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:53:15.225219 5053 catalog_manager.cc:1349] Generated new cluster ID: bec667d7ad814733a500bc3be06ede07
I20250812 01:53:15.225611 5053 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:53:15.261741 5053 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:53:15.263803 5053 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:53:15.290302 5053 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Generated new TSK 0
I20250812 01:53:15.291263 5053 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:53:15.302716 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250812 01:53:15.644698 5063 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:15.645222 5063 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:15.645718 5063 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:15.678519 5063 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:15.679509 5063 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:53:15.716504 5063 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:15.717890 5063 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:15.719664 5063 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:15.732203 5069 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:17.243237 5063 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.510s user 0.551s sys 0.907s
W20250812 01:53:17.243747 5063 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.511s user 0.551s sys 0.907s
W20250812 01:53:17.135361 5068 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 5063
W20250812 01:53:15.734086 5070 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:17.245190 5071 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1509 milliseconds
I20250812 01:53:17.245683 5063 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:53:17.245760 5072 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:17.249346 5063 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:17.251874 5063 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:17.253381 5063 hybrid_clock.cc:648] HybridClock initialized: now 1754963597253336 us; error 54 us; skew 500 ppm
I20250812 01:53:17.254192 5063 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:17.260372 5063 webserver.cc:489] Webserver started at http://127.2.74.65:37175/ using document root <none> and password file <none>
I20250812 01:53:17.261377 5063 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:17.261616 5063 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:17.262054 5063 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:17.266500 5063 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "75adb7b24ff64a85957dcaf4bdd728d1"
format_stamp: "Formatted at 2025-08-12 01:53:17 on dist-test-slave-3nxt"
I20250812 01:53:17.267668 5063 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "75adb7b24ff64a85957dcaf4bdd728d1"
format_stamp: "Formatted at 2025-08-12 01:53:17 on dist-test-slave-3nxt"
I20250812 01:53:17.275497 5063 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250812 01:53:17.282011 5079 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:17.283301 5063 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.003s sys 0.002s
I20250812 01:53:17.283654 5063 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "75adb7b24ff64a85957dcaf4bdd728d1"
format_stamp: "Formatted at 2025-08-12 01:53:17 on dist-test-slave-3nxt"
I20250812 01:53:17.284026 5063 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:17.331396 5063 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:17.332926 5063 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:17.333412 5063 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:17.336534 5063 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:17.340905 5063 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:17.341151 5063 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:17.341418 5063 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:17.341595 5063 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:17.499137 5063 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:39813
I20250812 01:53:17.499253 5191 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:39813 every 8 connection(s)
I20250812 01:53:17.501924 5063 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:53:17.508387 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5063
I20250812 01:53:17.508896 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:53:17.516392 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:17.527279 5192 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:17.527714 5192 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:17.528792 5192 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:17.531279 5004 ts_manager.cc:194] Registered new tserver with Master: 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65:39813)
I20250812 01:53:17.533309 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:37257
W20250812 01:53:17.835350 5196 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:17.835894 5196 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:17.836428 5196 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:17.869573 5196 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:17.870458 5196 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:53:17.906414 5196 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:17.907756 5196 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:17.909371 5196 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:17.921813 5202 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:18.536780 5192 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
W20250812 01:53:17.923171 5203 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:17.926553 5205 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:19.077373 5204 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250812 01:53:19.077487 5196 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:19.081640 5196 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:19.084347 5196 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:19.085788 5196 hybrid_clock.cc:648] HybridClock initialized: now 1754963599085757 us; error 66 us; skew 500 ppm
I20250812 01:53:19.086585 5196 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:19.093237 5196 webserver.cc:489] Webserver started at http://127.2.74.66:45969/ using document root <none> and password file <none>
I20250812 01:53:19.094149 5196 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:19.094372 5196 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:19.094820 5196 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:19.099336 5196 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "8b67450f926a4305baa2491c3514ea10"
format_stamp: "Formatted at 2025-08-12 01:53:19 on dist-test-slave-3nxt"
I20250812 01:53:19.100430 5196 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "8b67450f926a4305baa2491c3514ea10"
format_stamp: "Formatted at 2025-08-12 01:53:19 on dist-test-slave-3nxt"
I20250812 01:53:19.108076 5196 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250812 01:53:19.113790 5212 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:19.114799 5196 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250812 01:53:19.115125 5196 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8b67450f926a4305baa2491c3514ea10"
format_stamp: "Formatted at 2025-08-12 01:53:19 on dist-test-slave-3nxt"
I20250812 01:53:19.115468 5196 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:19.163167 5196 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:19.164670 5196 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:19.165133 5196 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:19.167623 5196 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:19.171756 5196 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:19.171984 5196 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:19.172246 5196 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:19.172411 5196 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:19.307003 5196 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:36061
I20250812 01:53:19.307097 5324 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:36061 every 8 connection(s)
I20250812 01:53:19.309763 5196 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:53:19.315745 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5196
I20250812 01:53:19.316277 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:53:19.323292 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:19.334867 5325 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:19.335331 5325 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:19.336359 5325 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:19.338413 5004 ts_manager.cc:194] Registered new tserver with Master: 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:19.339665 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:59365
W20250812 01:53:19.633054 5329 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:19.633582 5329 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:19.634096 5329 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:19.666383 5329 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:19.667253 5329 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:53:19.703393 5329 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:19.704738 5329 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:19.706319 5329 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:19.718822 5335 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:20.343796 5325 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
W20250812 01:53:19.719328 5336 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:20.994637 5338 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:20.997987 5337 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1273 milliseconds
W20250812 01:53:20.998224 5329 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.280s user 0.420s sys 0.858s
W20250812 01:53:20.998559 5329 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.280s user 0.420s sys 0.858s
I20250812 01:53:20.998788 5329 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:20.999832 5329 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:21.002457 5329 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:21.003870 5329 hybrid_clock.cc:648] HybridClock initialized: now 1754963601003837 us; error 55 us; skew 500 ppm
I20250812 01:53:21.004838 5329 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:21.012863 5329 webserver.cc:489] Webserver started at http://127.2.74.67:38669/ using document root <none> and password file <none>
I20250812 01:53:21.014086 5329 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:21.014468 5329 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:21.015151 5329 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:53:21.020088 5329 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "24bab58009374a4f9c7794a5c6b58664"
format_stamp: "Formatted at 2025-08-12 01:53:21 on dist-test-slave-3nxt"
I20250812 01:53:21.021275 5329 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "24bab58009374a4f9c7794a5c6b58664"
format_stamp: "Formatted at 2025-08-12 01:53:21 on dist-test-slave-3nxt"
I20250812 01:53:21.029589 5329 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.002s
I20250812 01:53:21.035944 5345 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:21.037267 5329 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250812 01:53:21.037636 5329 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "24bab58009374a4f9c7794a5c6b58664"
format_stamp: "Formatted at 2025-08-12 01:53:21 on dist-test-slave-3nxt"
I20250812 01:53:21.037988 5329 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:21.115573 5329 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:21.117592 5329 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:21.118196 5329 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:21.121464 5329 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:21.126121 5329 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:53:21.126394 5329 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:21.126643 5329 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:53:21.126785 5329 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:21.264446 5329 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:35519
I20250812 01:53:21.264550 5457 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:35519 every 8 connection(s)
I20250812 01:53:21.267390 5329 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:53:21.269881 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5329
I20250812 01:53:21.270450 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:53:21.293067 5458 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:21.293483 5458 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:21.294456 5458 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:21.296667 5004 ts_manager.cc:194] Registered new tserver with Master: 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:21.298091 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:46621
I20250812 01:53:21.306674 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:21.342175 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:21.342540 2345 test_util.cc:276] Using random seed: 1309491255
I20250812 01:53:21.384097 5004 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43484:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
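
For readers following the log: the CreateTable request dumped above is what the test client sends to the master. Below is a minimal, self-contained C++ sketch of an equivalent client-side call, assuming the public Kudu C++ client API (KuduClientBuilder, KuduSchemaBuilder, KuduTableCreator). The master address is copied from the log; the standalone main() and the error handling are illustrative assumptions, not part of the test output.

// Illustrative sketch only; not part of the logged test.
#include <memory>
#include <string>
#include <vector>

#include "kudu/client/client.h"
#include "kudu/client/schema.h"

using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

int main() {
  // Connect to the mini cluster's master (address taken from the log above).
  kudu::client::sp::shared_ptr<KuduClient> client;
  Status s = KuduClientBuilder()
                 .add_master_server_addr("127.2.74.126:33421")
                 .Build(&client);
  if (!s.ok()) return 1;

  // Schema matching the logged request: key INT32 (primary key, not null),
  // int_val INT32 not null, string_val STRING nullable; encoding and
  // compression left at their defaults (AUTO_ENCODING / DEFAULT_COMPRESSION).
  KuduSchemaBuilder b;
  b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
  b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
  KuduSchema schema;
  if (!b.Build(&schema).ok()) return 1;

  // Range-partition on "key" with no explicit splits (the log shows a single
  // RANGE (key) PARTITION UNBOUNDED tablet) and num_replicas=1, as requested.
  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  s = creator->table_name("TestTable")
          .schema(&schema)
          .set_range_partition_columns({"key"})
          .num_replicas(1)
          .Create();
  return s.ok() ? 0 : 1;
}
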
I20250812 01:53:21.428555 5260 tablet_service.cc:1468] Processing CreateTablet for tablet 7c3a45123f804700b0747994b958cf8c (DEFAULT_TABLE table=TestTable [id=d550f28f10034368af8872e8f43b9f7a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:21.430105 5260 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c3a45123f804700b0747994b958cf8c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:21.450029 5478 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap starting.
I20250812 01:53:21.455554 5478 tablet_bootstrap.cc:654] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:21.457544 5478 log.cc:826] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:21.462319 5478 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: No bootstrap required, opened a new log
I20250812 01:53:21.462764 5478 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent bootstrapping tablet: real 0.013s user 0.008s sys 0.003s
I20250812 01:53:21.481012 5478 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:21.481624 5478 raft_consensus.cc:383] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:21.481878 5478 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Initialized, Role: FOLLOWER
I20250812 01:53:21.482659 5478 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:21.483187 5478 raft_consensus.cc:397] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:21.483480 5478 raft_consensus.cc:491] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:21.483831 5478 raft_consensus.cc:3058] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:21.488620 5478 raft_consensus.cc:513] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:21.489365 5478 leader_election.cc:304] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:21.491132 5478 leader_election.cc:290] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:53:21.491519 5480 raft_consensus.cc:2802] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:21.493763 5480 raft_consensus.cc:695] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 LEADER]: Becoming Leader. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Running, Role: LEADER
I20250812 01:53:21.494736 5480 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:21.495222 5478 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent starting tablet: real 0.032s user 0.030s sys 0.002s
I20250812 01:53:21.508776 5004 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b67450f926a4305baa2491c3514ea10 (127.2.74.66). New cstate: current_term: 1 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:21.729794 2345 test_util.cc:276] Using random seed: 1309878501
I20250812 01:53:21.751753 4999 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43492:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250812 01:53:21.781363 5393 tablet_service.cc:1468] Processing CreateTablet for tablet 0befab116793421e8a070f9325978c9e (DEFAULT_TABLE table=TestTable1 [id=1bbbce981dc344ca95a1c7415f18ebec]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:21.782893 5393 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0befab116793421e8a070f9325978c9e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:21.803474 5499 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap starting.
I20250812 01:53:21.809036 5499 tablet_bootstrap.cc:654] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:21.810719 5499 log.cc:826] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:21.815021 5499 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: No bootstrap required, opened a new log
I20250812 01:53:21.815452 5499 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent bootstrapping tablet: real 0.012s user 0.005s sys 0.005s
I20250812 01:53:21.833467 5499 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:21.834030 5499 raft_consensus.cc:383] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:21.834226 5499 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Initialized, Role: FOLLOWER
I20250812 01:53:21.834843 5499 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:21.835335 5499 raft_consensus.cc:397] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:21.835610 5499 raft_consensus.cc:491] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:21.835916 5499 raft_consensus.cc:3058] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:21.840075 5499 raft_consensus.cc:513] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:21.840816 5499 leader_election.cc:304] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 24bab58009374a4f9c7794a5c6b58664; no voters:
I20250812 01:53:21.842592 5499 leader_election.cc:290] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:53:21.843083 5501 raft_consensus.cc:2802] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:21.845697 5501 raft_consensus.cc:695] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 LEADER]: Becoming Leader. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Running, Role: LEADER
I20250812 01:53:21.846832 5499 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent starting tablet: real 0.031s user 0.030s sys 0.003s
I20250812 01:53:21.846284 5458 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:21.849174 5501 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:21.858177 4999 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: term changed from 0 to 1, leader changed from <none> to 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67). New cstate: current_term: 1 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } }
W20250812 01:53:21.968072 5321 debug-util.cc:398] Leaking SignalData structure 0x7b08000b3060 after lost signal to thread 5197
I20250812 01:53:22.040334 2345 test_util.cc:276] Using random seed: 1310189043
I20250812 01:53:22.066881 4997 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43502:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250812 01:53:22.102398 5127 tablet_service.cc:1468] Processing CreateTablet for tablet 5a886d1478434d1fa5960c6fad954082 (DEFAULT_TABLE table=TestTable2 [id=184467c7bd934ef09a88bd6b257ef354]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:53:22.104445 5127 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5a886d1478434d1fa5960c6fad954082. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:22.137732 5519 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:22.145990 5519 tablet_bootstrap.cc:654] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Neither blocks nor log segments found. Creating new log.
I20250812 01:53:22.148430 5519 log.cc:826] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:22.155031 5519 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: No bootstrap required, opened a new log
I20250812 01:53:22.155578 5519 ts_tablet_manager.cc:1397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.019s user 0.010s sys 0.005s
I20250812 01:53:22.185055 5519 raft_consensus.cc:357] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:22.185897 5519 raft_consensus.cc:383] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:53:22.186228 5519 raft_consensus.cc:738] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: FOLLOWER
I20250812 01:53:22.187196 5519 consensus_queue.cc:260] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:22.187975 5519 raft_consensus.cc:397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:22.188361 5519 raft_consensus.cc:491] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:22.188838 5519 raft_consensus.cc:3058] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:53:22.196393 5519 raft_consensus.cc:513] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:22.197480 5519 leader_election.cc:304] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1; no voters:
I20250812 01:53:22.199781 5519 leader_election.cc:290] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:53:22.200125 5521 raft_consensus.cc:2802] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:53:22.206719 5519 ts_tablet_manager.cc:1428] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.051s user 0.051s sys 0.003s
I20250812 01:53:22.207405 5521 raft_consensus.cc:695] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 LEADER]: Becoming Leader. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Running, Role: LEADER
I20250812 01:53:22.208388 5521 consensus_queue.cc:237] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:22.221649 4997 catalog_manager.cc:5582] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: term changed from 0 to 1, leader changed from <none> to 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65). New cstate: current_term: 1 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:22.426198 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 4970
W20250812 01:53:22.541854 5325 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
W20250812 01:53:22.876098 5458 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
W20250812 01:53:23.259094 5192 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
I20250812 01:53:27.782179 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5063
I20250812 01:53:27.806286 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5196
I20250812 01:53:27.835144 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5329
I20250812 01:53:27.862490 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--webserver_interface=127.2.74.126
--webserver_port=43517
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:33421 with env {}
W20250812 01:53:28.164986 5600 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:28.165769 5600 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:28.166245 5600 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:28.198513 5600 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:53:28.198892 5600 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:28.199180 5600 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:53:28.199429 5600 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:53:28.236361 5600 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:33421
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=43517
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:28.238409 5600 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:28.240571 5600 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:28.254630 5606 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:28.255077 5607 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:29.469841 5600 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.215s user 0.337s sys 0.868s
W20250812 01:53:29.470556 5600 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.216s user 0.338s sys 0.869s
W20250812 01:53:29.471871 5609 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:29.473170 5608 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1216 milliseconds
I20250812 01:53:29.473261 5600 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:29.474892 5600 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:29.478330 5600 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:29.479861 5600 hybrid_clock.cc:648] HybridClock initialized: now 1754963609479790 us; error 70 us; skew 500 ppm
I20250812 01:53:29.481134 5600 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:29.490881 5600 webserver.cc:489] Webserver started at http://127.2.74.126:43517/ using document root <none> and password file <none>
I20250812 01:53:29.492491 5600 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:29.493005 5600 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:29.505093 5600 fs_manager.cc:714] Time spent opening directory manager: real 0.007s user 0.006s sys 0.002s
I20250812 01:53:29.511593 5616 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:29.512970 5600 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.004s sys 0.003s
I20250812 01:53:29.513424 5600 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:29.516477 5600 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:29.602825 5600 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:29.604977 5600 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:29.605626 5600 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:29.682027 5600 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:33421
I20250812 01:53:29.682098 5667 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:33421 every 8 connection(s)
I20250812 01:53:29.684954 5600 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:53:29.691746 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5600
I20250812 01:53:29.693231 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:39813
--local_ip_for_outbound_sockets=127.2.74.65
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=37175
--webserver_interface=127.2.74.65
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:29.695129 5668 sys_catalog.cc:263] Verifying existing consensus state
I20250812 01:53:29.699959 5668 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap starting.
I20250812 01:53:29.715036 5668 log.cc:826] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:29.769119 5668 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:29.769955 5668 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap complete.
I20250812 01:53:29.791416 5668 raft_consensus.cc:357] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:29.793588 5668 raft_consensus.cc:738] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Initialized, Role: FOLLOWER
I20250812 01:53:29.794396 5668 consensus_queue.cc:260] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:29.794901 5668 raft_consensus.cc:397] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:29.795151 5668 raft_consensus.cc:491] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:29.795459 5668 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:29.801369 5668 raft_consensus.cc:513] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:29.802065 5668 leader_election.cc:304] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 427d60813b5b4e92afb85a2eb2ec2521; no voters:
I20250812 01:53:29.803761 5668 leader_election.cc:290] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250812 01:53:29.804230 5672 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Leader election won for term 3
I20250812 01:53:29.807237 5672 raft_consensus.cc:695] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 LEADER]: Becoming Leader. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Running, Role: LEADER
I20250812 01:53:29.807853 5668 sys_catalog.cc:564] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:53:29.808194 5672 consensus_queue.cc:237] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:29.817334 5673 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:29.817687 5674 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 427d60813b5b4e92afb85a2eb2ec2521. Latest consensus state: current_term: 3 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:29.818163 5673 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:29.818449 5674 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:29.826604 5679 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:53:29.840162 5679 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=184467c7bd934ef09a88bd6b257ef354]
I20250812 01:53:29.841979 5679 catalog_manager.cc:671] Loaded metadata for table TestTable [id=a08508d88c154ed08d8fa9bb6c8e22cb]
I20250812 01:53:29.843600 5679 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=ad8d0e50b8e84bef849de7defdd7858f]
I20250812 01:53:29.851601 5679 tablet_loader.cc:96] loaded metadata for tablet 0befab116793421e8a070f9325978c9e (table TestTable1 [id=ad8d0e50b8e84bef849de7defdd7858f])
I20250812 01:53:29.853060 5679 tablet_loader.cc:96] loaded metadata for tablet 5a886d1478434d1fa5960c6fad954082 (table TestTable2 [id=184467c7bd934ef09a88bd6b257ef354])
I20250812 01:53:29.854348 5679 tablet_loader.cc:96] loaded metadata for tablet 7c3a45123f804700b0747994b958cf8c (table TestTable [id=a08508d88c154ed08d8fa9bb6c8e22cb])
I20250812 01:53:29.855769 5679 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:53:29.861064 5679 catalog_manager.cc:1261] Loaded cluster ID: bec667d7ad814733a500bc3be06ede07
I20250812 01:53:29.861420 5679 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:53:29.869652 5679 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:53:29.875154 5679 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Loaded TSK: 0
I20250812 01:53:29.876917 5679 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250812 01:53:30.052765 5670 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:30.053292 5670 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:30.053798 5670 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:30.085510 5670 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:30.086395 5670 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:53:30.121343 5670 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:39813
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=37175
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:30.122646 5670 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:30.124276 5670 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:30.136962 5695 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:30.139748 5696 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:30.140697 5698 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:31.748970 5697 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1608 milliseconds
I20250812 01:53:31.749073 5670 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:31.750296 5670 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:31.752975 5670 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:31.754339 5670 hybrid_clock.cc:648] HybridClock initialized: now 1754963611754319 us; error 45 us; skew 500 ppm
I20250812 01:53:31.755118 5670 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:31.762276 5670 webserver.cc:489] Webserver started at http://127.2.74.65:37175/ using document root <none> and password file <none>
I20250812 01:53:31.763213 5670 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:31.763433 5670 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:31.771131 5670 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250812 01:53:31.775772 5705 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:31.776906 5670 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250812 01:53:31.777259 5670 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "75adb7b24ff64a85957dcaf4bdd728d1"
format_stamp: "Formatted at 2025-08-12 01:53:17 on dist-test-slave-3nxt"
I20250812 01:53:31.779333 5670 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:31.829484 5670 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:31.830955 5670 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:31.831435 5670 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:31.833974 5670 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:31.839534 5712 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:53:31.847049 5670 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:53:31.847316 5670 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.001s
I20250812 01:53:31.847582 5670 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:53:31.852322 5670 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:53:31.852535 5670 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.002s sys 0.000s
I20250812 01:53:31.852931 5712 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:31.908347 5712 log.cc:826] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:32.005836 5712 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:32.006628 5712 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:32.008229 5712 ts_tablet_manager.cc:1397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.156s user 0.123s sys 0.028s
I20250812 01:53:32.025348 5670 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:39813
I20250812 01:53:32.025790 5819 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:39813 every 8 connection(s)
I20250812 01:53:32.028034 5670 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:53:32.026854 5712 raft_consensus.cc:357] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:32.029065 5712 raft_consensus.cc:738] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: FOLLOWER
I20250812 01:53:32.030059 5712 consensus_queue.cc:260] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:32.030848 5712 raft_consensus.cc:397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:32.031244 5712 raft_consensus.cc:491] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:32.031705 5712 raft_consensus.cc:3058] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:53:32.036527 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5670
I20250812 01:53:32.038143 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:36061
--local_ip_for_outbound_sockets=127.2.74.66
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=45969
--webserver_interface=127.2.74.66
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:32.040854 5712 raft_consensus.cc:513] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:32.041534 5712 leader_election.cc:304] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1; no voters:
I20250812 01:53:32.043638 5712 leader_election.cc:290] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250812 01:53:32.044560 5824 raft_consensus.cc:2802] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:53:32.048365 5824 raft_consensus.cc:695] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEADER]: Becoming Leader. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Running, Role: LEADER
I20250812 01:53:32.049679 5824 consensus_queue.cc:237] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:32.055936 5712 ts_tablet_manager.cc:1428] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.047s user 0.027s sys 0.020s
I20250812 01:53:32.058686 5820 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:32.059247 5820 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:32.060792 5820 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:32.066553 5633 ts_manager.cc:194] Registered new tserver with Master: 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65:39813)
I20250812 01:53:32.071208 5633 catalog_manager.cc:5582] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:32.114917 5633 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:56305
I20250812 01:53:32.118873 5820 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
W20250812 01:53:32.364360 5827 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:32.364862 5827 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:32.365324 5827 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:32.396572 5827 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:32.397526 5827 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:53:32.434609 5827 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:36061
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=45969
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:32.435842 5827 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:32.437458 5827 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:32.449932 5839 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:32.450670 5840 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:33.662200 5842 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:33.666173 5841 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1215 milliseconds
I20250812 01:53:33.666265 5827 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:33.667502 5827 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:33.670187 5827 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:33.671612 5827 hybrid_clock.cc:648] HybridClock initialized: now 1754963613671569 us; error 53 us; skew 500 ppm
I20250812 01:53:33.672386 5827 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:33.678871 5827 webserver.cc:489] Webserver started at http://127.2.74.66:45969/ using document root <none> and password file <none>
I20250812 01:53:33.679826 5827 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:33.680136 5827 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:33.688377 5827 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.002s sys 0.005s
I20250812 01:53:33.693341 5850 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:33.694444 5827 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250812 01:53:33.694765 5827 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8b67450f926a4305baa2491c3514ea10"
format_stamp: "Formatted at 2025-08-12 01:53:19 on dist-test-slave-3nxt"
I20250812 01:53:33.696767 5827 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:33.753973 5827 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:33.755520 5827 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:33.755962 5827 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:33.758677 5827 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:33.764520 5857 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:53:33.772351 5827 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:53:33.772689 5827 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.002s sys 0.000s
I20250812 01:53:33.772976 5827 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:53:33.777887 5827 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:53:33.778090 5827 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250812 01:53:33.778545 5857 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap starting.
I20250812 01:53:33.836228 5857 log.cc:826] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:33.937383 5857 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:33.938366 5857 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap complete.
I20250812 01:53:33.940431 5857 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent bootstrapping tablet: real 0.162s user 0.128s sys 0.028s
I20250812 01:53:33.963371 5827 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:36061
I20250812 01:53:33.963549 5964 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:36061 every 8 connection(s)
I20250812 01:53:33.961130 5857 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:33.964314 5857 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Initialized, Role: FOLLOWER
I20250812 01:53:33.965077 5857 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:33.965538 5857 raft_consensus.cc:397] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:33.965772 5857 raft_consensus.cc:491] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:33.966058 5857 raft_consensus.cc:3058] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:53:33.966146 5827 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:53:33.973862 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5827
I20250812 01:53:33.974184 5857 raft_consensus.cc:513] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:33.974974 5857 leader_election.cc:304] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:33.975821 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:35519
--local_ip_for_outbound_sockets=127.2.74.67
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=38669
--webserver_interface=127.2.74.67
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:33.977840 5857 leader_election.cc:290] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250812 01:53:33.978237 5969 raft_consensus.cc:2802] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:53:33.983901 5969 raft_consensus.cc:695] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 LEADER]: Becoming Leader. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Running, Role: LEADER
I20250812 01:53:33.985172 5969 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } }
I20250812 01:53:33.988121 5965 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:33.988741 5965 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:33.990559 5965 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:33.991322 5857 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent starting tablet: real 0.051s user 0.040s sys 0.008s
I20250812 01:53:33.995486 5633 ts_manager.cc:194] Registered new tserver with Master: 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:33.996548 5633 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: term changed from 0 to 2, leader changed from <none> to 8b67450f926a4305baa2491c3514ea10 (127.2.74.66), VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) added. New cstate: current_term: 2 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:34.011085 5633 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:41207
I20250812 01:53:34.015821 5965 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:34.028038 5920 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:34.031474 5970 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index -1 to 10, NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } } }
I20250812 01:53:34.041304 5620 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250812 01:53:34.050861 5633 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: config changed from index -1 to 10, NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) added. New cstate: current_term: 2 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250812 01:53:34.059183 5854 consensus_peers.cc:489] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 -> Peer 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65:39813): Couldn't send request to peer 75adb7b24ff64a85957dcaf4bdd728d1. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 7c3a45123f804700b0747994b958cf8c. This is attempt 1: this message will repeat every 5th retry.
W20250812 01:53:34.059900 5633 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index 10: no extra replica candidate found for tablet 7c3a45123f804700b0747994b958cf8c (table TestTable [id=a08508d88c154ed08d8fa9bb6c8e22cb]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
W20250812 01:53:34.314174 5972 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:34.314687 5972 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:34.315183 5972 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:34.350050 5972 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:34.350869 5972 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:53:34.385748 5972 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:35519
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=38669
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:34.387048 5972 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:34.388630 5972 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:34.400035 5987 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:34.487509 5993 ts_tablet_manager.cc:927] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Initiating tablet copy from peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:34.497545 5993 tablet_copy_client.cc:323] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Beginning tablet copy session from remote peer at address 127.2.74.66:36061
I20250812 01:53:34.520622 5940 tablet_copy_service.cc:140] P 8b67450f926a4305baa2491c3514ea10: Received BeginTabletCopySession request for tablet 7c3a45123f804700b0747994b958cf8c from peer 75adb7b24ff64a85957dcaf4bdd728d1 ({username='slave'} at 127.2.74.65:50537)
I20250812 01:53:34.521423 5940 tablet_copy_service.cc:161] P 8b67450f926a4305baa2491c3514ea10: Beginning new tablet copy session on tablet 7c3a45123f804700b0747994b958cf8c from peer 75adb7b24ff64a85957dcaf4bdd728d1 at {username='slave'} at 127.2.74.65:50537: session id = 75adb7b24ff64a85957dcaf4bdd728d1-7c3a45123f804700b0747994b958cf8c
I20250812 01:53:34.531790 5940 tablet_copy_source_session.cc:215] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Tablet Copy: opened 0 blocks and 1 log segments
I20250812 01:53:34.537880 5993 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c3a45123f804700b0747994b958cf8c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:34.562970 5993 tablet_copy_client.cc:806] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Starting download of 0 data blocks...
I20250812 01:53:34.563786 5993 tablet_copy_client.cc:670] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Starting download of 1 WAL segments...
I20250812 01:53:34.570071 5993 tablet_copy_client.cc:538] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250812 01:53:34.582777 5993 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:34.771991 5993 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:34.776499 5993 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:34.777496 5993 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.195s user 0.166s sys 0.012s
I20250812 01:53:34.779989 5993 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:34.780828 5993 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: LEARNER
I20250812 01:53:34.781594 5993 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:34.798705 5993 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.021s user 0.005s sys 0.012s
I20250812 01:53:34.804106 5940 tablet_copy_service.cc:342] P 8b67450f926a4305baa2491c3514ea10: Request end of tablet copy session 75adb7b24ff64a85957dcaf4bdd728d1-7c3a45123f804700b0747994b958cf8c received from {username='slave'} at 127.2.74.65:50537
I20250812 01:53:34.804858 5940 tablet_copy_service.cc:434] P 8b67450f926a4305baa2491c3514ea10: ending tablet copy session 75adb7b24ff64a85957dcaf4bdd728d1-7c3a45123f804700b0747994b958cf8c on tablet 7c3a45123f804700b0747994b958cf8c with peer 75adb7b24ff64a85957dcaf4bdd728d1
I20250812 01:53:35.048792 5775 raft_consensus.cc:1215] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
I20250812 01:53:35.435645 5998 raft_consensus.cc:1062] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: attempting to promote NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 to VOTER
I20250812 01:53:35.438120 5998 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:35.456197 5775 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Refusing update from remote peer 8b67450f926a4305baa2491c3514ea10: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250812 01:53:35.459717 5999 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Connected to new peer: Peer: permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250812 01:53:35.473204 5999 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:35.483532 5775 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:35.502399 5633 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250812 01:53:35.804317 5986 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 5972
W20250812 01:53:36.256145 5986 kernel_stack_watchdog.cc:198] Thread 5972 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:53:34.401655 5988 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:36.257656 5989 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1856 milliseconds
W20250812 01:53:36.257730 5972 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.857s user 0.679s sys 1.133s
W20250812 01:53:36.258093 5972 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.857s user 0.679s sys 1.133s
W20250812 01:53:36.258683 5990 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:36.258687 5972 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:36.262245 5972 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:36.264293 5972 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:36.265623 5972 hybrid_clock.cc:648] HybridClock initialized: now 1754963616265587 us; error 53 us; skew 500 ppm
I20250812 01:53:36.266393 5972 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:36.272624 5972 webserver.cc:489] Webserver started at http://127.2.74.67:38669/ using document root <none> and password file <none>
I20250812 01:53:36.273970 5972 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:36.274225 5972 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:36.283205 5972 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250812 01:53:36.288097 6012 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:36.289312 5972 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250812 01:53:36.289652 5972 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "24bab58009374a4f9c7794a5c6b58664"
format_stamp: "Formatted at 2025-08-12 01:53:21 on dist-test-slave-3nxt"
I20250812 01:53:36.291620 5972 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:36.352470 5972 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:36.353976 5972 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:36.354408 5972 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:36.356884 5972 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:36.359217 5618 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index 10: aborting the task: latest config opid_index 11; task opid_index 10
I20250812 01:53:36.362459 6019 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:53:36.369953 5972 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:53:36.370211 5972 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250812 01:53:36.370498 5972 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:53:36.375205 5972 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:53:36.375415 5972 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250812 01:53:36.375842 6019 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap starting.
I20250812 01:53:36.428390 6019 log.cc:826] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:36.541131 5972 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:35519
I20250812 01:53:36.541234 6126 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:35519 every 8 connection(s)
I20250812 01:53:36.544564 5972 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:53:36.548444 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 5972
I20250812 01:53:36.559538 6019 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:36.563611 6019 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap complete.
I20250812 01:53:36.565438 6019 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent bootstrapping tablet: real 0.190s user 0.132s sys 0.050s
I20250812 01:53:36.580395 6127 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:36.580837 6127 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:36.582027 6127 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:36.582470 6019 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:36.584414 6019 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Initialized, Role: FOLLOWER
I20250812 01:53:36.585158 6019 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:36.585636 6019 raft_consensus.cc:397] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:36.585897 6019 raft_consensus.cc:491] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:36.586274 6019 raft_consensus.cc:3058] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:53:36.592519 6019 raft_consensus.cc:513] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:36.593444 6019 leader_election.cc:304] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 24bab58009374a4f9c7794a5c6b58664; no voters:
I20250812 01:53:36.593993 5633 ts_manager.cc:194] Registered new tserver with Master: 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:36.595693 6019 leader_election.cc:290] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250812 01:53:36.596084 6134 raft_consensus.cc:2802] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:53:36.597286 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:36.597556 5633 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:56693
I20250812 01:53:36.602165 6134 raft_consensus.cc:695] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: Becoming Leader. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Running, Role: LEADER
I20250812 01:53:36.602869 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:36.603624 6127 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:36.603432 6134 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } }
I20250812 01:53:36.604475 6019 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent starting tablet: real 0.039s user 0.025s sys 0.012s
W20250812 01:53:36.607061 2345 ts_itest-base.cc:209] found only 2 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER } interned_replicas { ts_info_idx: 1 role: FOLLOWER }
I20250812 01:53:36.611034 5633 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: term changed from 0 to 2, leader changed from <none> to 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67), VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) added. New cstate: current_term: 2 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:36.635674 6082 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } }
I20250812 01:53:36.639518 6136 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } }
I20250812 01:53:36.646761 5618 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0befab116793421e8a070f9325978c9e with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250812 01:53:36.648985 6016 consensus_peers.cc:489] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 -> Peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061): Couldn't send request to peer 8b67450f926a4305baa2491c3514ea10. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0befab116793421e8a070f9325978c9e. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:53:36.649801 5633 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: config changed from index -1 to 8, NON_VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) added. New cstate: current_term: 2 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250812 01:53:36.659654 6082 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:36.663278 6135 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } } }
W20250812 01:53:36.665581 6016 consensus_peers.cc:489] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 -> Peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061): Couldn't send request to peer 8b67450f926a4305baa2491c3514ea10. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0befab116793421e8a070f9325978c9e. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:53:36.670435 5618 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0befab116793421e8a070f9325978c9e with cas_config_opid_index 8: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250812 01:53:36.674669 6016 consensus_peers.cc:489] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 -> Peer 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65:39813): Couldn't send request to peer 75adb7b24ff64a85957dcaf4bdd728d1. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0befab116793421e8a070f9325978c9e. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:53:36.674103 5633 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: config changed from index 8 to 9, NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) added. New cstate: current_term: 2 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250812 01:53:36.704432 5920 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } }
I20250812 01:53:36.712231 5775 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Refusing update from remote peer 8b67450f926a4305baa2491c3514ea10: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250812 01:53:36.713617 6148 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Connected to new peer: Peer: permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250812 01:53:36.721657 6133 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } } }
W20250812 01:53:36.723524 5852 consensus_peers.cc:489] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 -> Peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Couldn't send request to peer 24bab58009374a4f9c7794a5c6b58664. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 7c3a45123f804700b0747994b958cf8c. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:53:36.723031 5775 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } } }
I20250812 01:53:36.729707 5620 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index 11: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 7)
I20250812 01:53:36.732983 5633 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: config changed from index 11 to 12, NON_VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) added. New cstate: current_term: 2 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
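The ADD_PEER:NON_VOTER config changes above are driven automatically by the master's re-replication logic once the tablet is under-replicated. The same change can be requested by hand with the kudu CLI; a minimal sketch, assuming the kudu binary from this build is on PATH and reusing the master address, tablet ID, and tserver UUID that appear in the surrounding log lines:
# Add tserver 24bab58009374a4f9c7794a5c6b58664 as a NON_VOTER replica of
# tablet 7c3a45123f804700b0747994b958cf8c via the master at 127.2.74.126:33421.
kudu tablet change_config add_replica 127.2.74.126:33421 \
    7c3a45123f804700b0747994b958cf8c 24bab58009374a4f9c7794a5c6b58664 NON_VOTER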
I20250812 01:53:37.148886 6152 ts_tablet_manager.cc:927] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Initiating tablet copy from peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:37.150281 6152 tablet_copy_client.cc:323] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: tablet copy: Beginning tablet copy session from remote peer at address 127.2.74.67:35519
I20250812 01:53:37.151774 6102 tablet_copy_service.cc:140] P 24bab58009374a4f9c7794a5c6b58664: Received BeginTabletCopySession request for tablet 0befab116793421e8a070f9325978c9e from peer 8b67450f926a4305baa2491c3514ea10 ({username='slave'} at 127.2.74.66:46357)
I20250812 01:53:37.152209 6102 tablet_copy_service.cc:161] P 24bab58009374a4f9c7794a5c6b58664: Beginning new tablet copy session on tablet 0befab116793421e8a070f9325978c9e from peer 8b67450f926a4305baa2491c3514ea10 at {username='slave'} at 127.2.74.66:46357: session id = 8b67450f926a4305baa2491c3514ea10-0befab116793421e8a070f9325978c9e
I20250812 01:53:37.156713 6102 tablet_copy_source_session.cc:215] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Tablet Copy: opened 0 blocks and 1 log segments
I20250812 01:53:37.159842 6152 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0befab116793421e8a070f9325978c9e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:37.169788 6152 tablet_copy_client.cc:806] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: tablet copy: Starting download of 0 data blocks...
I20250812 01:53:37.170302 6152 tablet_copy_client.cc:670] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: tablet copy: Starting download of 1 WAL segments...
I20250812 01:53:37.173856 6152 tablet_copy_client.cc:538] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250812 01:53:37.179289 6152 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap starting.
I20250812 01:53:37.215345 6155 ts_tablet_manager.cc:927] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Initiating tablet copy from peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:37.217281 6155 tablet_copy_client.cc:323] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: tablet copy: Beginning tablet copy session from remote peer at address 127.2.74.66:36061
I20250812 01:53:37.218739 5940 tablet_copy_service.cc:140] P 8b67450f926a4305baa2491c3514ea10: Received BeginTabletCopySession request for tablet 7c3a45123f804700b0747994b958cf8c from peer 24bab58009374a4f9c7794a5c6b58664 ({username='slave'} at 127.2.74.67:45489)
I20250812 01:53:37.219144 5940 tablet_copy_service.cc:161] P 8b67450f926a4305baa2491c3514ea10: Beginning new tablet copy session on tablet 7c3a45123f804700b0747994b958cf8c from peer 24bab58009374a4f9c7794a5c6b58664 at {username='slave'} at 127.2.74.67:45489: session id = 24bab58009374a4f9c7794a5c6b58664-7c3a45123f804700b0747994b958cf8c
I20250812 01:53:37.223686 5940 tablet_copy_source_session.cc:215] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Tablet Copy: opened 0 blocks and 1 log segments
I20250812 01:53:37.226392 6155 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7c3a45123f804700b0747994b958cf8c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:37.240015 6155 tablet_copy_client.cc:806] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: tablet copy: Starting download of 0 data blocks...
I20250812 01:53:37.240545 6155 tablet_copy_client.cc:670] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: tablet copy: Starting download of 1 WAL segments...
I20250812 01:53:37.244555 6155 tablet_copy_client.cc:538] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250812 01:53:37.250063 6155 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap starting.
I20250812 01:53:37.255404 6152 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:37.255985 6152 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap complete.
I20250812 01:53:37.256407 6152 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Time spent bootstrapping tablet: real 0.077s user 0.060s sys 0.016s
I20250812 01:53:37.257990 6152 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:37.258455 6152 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Initialized, Role: LEARNER
I20250812 01:53:37.258845 6152 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:37.260390 6152 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Time spent starting tablet: real 0.004s user 0.000s sys 0.000s
I20250812 01:53:37.262915 6102 tablet_copy_service.cc:342] P 24bab58009374a4f9c7794a5c6b58664: Request end of tablet copy session 8b67450f926a4305baa2491c3514ea10-0befab116793421e8a070f9325978c9e received from {username='slave'} at 127.2.74.66:46357
I20250812 01:53:37.263362 6102 tablet_copy_service.cc:434] P 24bab58009374a4f9c7794a5c6b58664: ending tablet copy session 8b67450f926a4305baa2491c3514ea10-0befab116793421e8a070f9325978c9e on tablet 0befab116793421e8a070f9325978c9e with peer 8b67450f926a4305baa2491c3514ea10
I20250812 01:53:37.276134 6158 ts_tablet_manager.cc:927] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Initiating tablet copy from peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:37.277602 6158 tablet_copy_client.cc:323] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Beginning tablet copy session from remote peer at address 127.2.74.67:35519
I20250812 01:53:37.287875 6102 tablet_copy_service.cc:140] P 24bab58009374a4f9c7794a5c6b58664: Received BeginTabletCopySession request for tablet 0befab116793421e8a070f9325978c9e from peer 75adb7b24ff64a85957dcaf4bdd728d1 ({username='slave'} at 127.2.74.65:57573)
I20250812 01:53:37.288285 6102 tablet_copy_service.cc:161] P 24bab58009374a4f9c7794a5c6b58664: Beginning new tablet copy session on tablet 0befab116793421e8a070f9325978c9e from peer 75adb7b24ff64a85957dcaf4bdd728d1 at {username='slave'} at 127.2.74.65:57573: session id = 75adb7b24ff64a85957dcaf4bdd728d1-0befab116793421e8a070f9325978c9e
I20250812 01:53:37.292327 6102 tablet_copy_source_session.cc:215] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Tablet Copy: opened 0 blocks and 1 log segments
I20250812 01:53:37.294680 6158 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0befab116793421e8a070f9325978c9e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:53:37.304426 6158 tablet_copy_client.cc:806] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Starting download of 0 data blocks...
I20250812 01:53:37.304948 6158 tablet_copy_client.cc:670] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Starting download of 1 WAL segments...
I20250812 01:53:37.308642 6158 tablet_copy_client.cc:538] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250812 01:53:37.314278 6158 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:37.348354 6155 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:37.349004 6155 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap complete.
I20250812 01:53:37.349447 6155 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Time spent bootstrapping tablet: real 0.100s user 0.083s sys 0.016s
I20250812 01:53:37.351467 6155 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } }
I20250812 01:53:37.351930 6155 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Initialized, Role: LEARNER
I20250812 01:53:37.352347 6155 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: NON_VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: true } }
I20250812 01:53:37.353754 6155 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250812 01:53:37.355383 5940 tablet_copy_service.cc:342] P 8b67450f926a4305baa2491c3514ea10: Request end of tablet copy session 24bab58009374a4f9c7794a5c6b58664-7c3a45123f804700b0747994b958cf8c received from {username='slave'} at 127.2.74.67:45489
I20250812 01:53:37.355697 5940 tablet_copy_service.cc:434] P 8b67450f926a4305baa2491c3514ea10: ending tablet copy session 24bab58009374a4f9c7794a5c6b58664-7c3a45123f804700b0747994b958cf8c on tablet 7c3a45123f804700b0747994b958cf8c with peer 24bab58009374a4f9c7794a5c6b58664
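Both learner replicas above received their data through Kudu's tablet copy mechanism: the destination opens a copy session on the source peer, downloads data blocks and WAL segments, replaces its superblock, and then bootstraps. The same transfer can be started manually with the remote_replica tool; a hedged sketch, reusing the tablet ID and tserver RPC addresses from the session logged above:
# Copy tablet 7c3a45123f804700b0747994b958cf8c from the tserver at
# 127.2.74.66:36061 onto the tserver at 127.2.74.67:35519.
kudu remote_replica copy 7c3a45123f804700b0747994b958cf8c \
    127.2.74.66:36061 127.2.74.67:35519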
I20250812 01:53:37.389199 6158 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:37.389820 6158 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:37.390265 6158 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.076s user 0.075s sys 0.001s
I20250812 01:53:37.391788 6158 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:37.392256 6158 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: LEARNER
I20250812 01:53:37.392684 6158 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: NON_VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: true } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:37.395197 6158 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.005s user 0.006s sys 0.000s
I20250812 01:53:37.396683 6102 tablet_copy_service.cc:342] P 24bab58009374a4f9c7794a5c6b58664: Request end of tablet copy session 75adb7b24ff64a85957dcaf4bdd728d1-0befab116793421e8a070f9325978c9e received from {username='slave'} at 127.2.74.65:57573
I20250812 01:53:37.397023 6102 tablet_copy_service.cc:434] P 24bab58009374a4f9c7794a5c6b58664: ending tablet copy session 75adb7b24ff64a85957dcaf4bdd728d1-0befab116793421e8a070f9325978c9e on tablet 0befab116793421e8a070f9325978c9e with peer 75adb7b24ff64a85957dcaf4bdd728d1
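The "Could only allocate 1 dirs of requested 3" messages during these copies are expected with this mini-cluster layout: each tablet server is configured with a single data directory, while the data-dir group size defaults to 3 (the fs_target_data_dirs_per_tablet flag). A hedged sketch of a tserver configuration that could satisfy that default, with placeholder paths and the master address taken from this cluster:
# Three data directories let each tablet's data-dir group reach the default
# target of 3; the /data/* paths here are illustrative placeholders.
kudu tserver run \
    --fs_wal_dir=/data/0/wal \
    --fs_data_dirs=/data/0/data,/data/1/data,/data/2/data \
    --fs_target_data_dirs_per_tablet=3 \
    --tserver_master_addrs=127.2.74.126:33421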
I20250812 01:53:37.611585 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 75adb7b24ff64a85957dcaf4bdd728d1 to finish bootstrapping
I20250812 01:53:37.624785 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8b67450f926a4305baa2491c3514ea10 to finish bootstrapping
I20250812 01:53:37.636823 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 24bab58009374a4f9c7794a5c6b58664 to finish bootstrapping
I20250812 01:53:37.684923 6082 raft_consensus.cc:1215] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12] Dedup: 2.12->[]
I20250812 01:53:37.736290 5775 raft_consensus.cc:1215] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250812 01:53:37.760349 5920 raft_consensus.cc:1215] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250812 01:53:37.912457 6062 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:53:37.916076 5755 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:53:37.918447 5900 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:53:38.126418 6175 raft_consensus.cc:1062] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: attempting to promote NON_VOTER 24bab58009374a4f9c7794a5c6b58664 to VOTER
I20250812 01:53:38.128388 6175 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:38.141099 6082 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEARNER]: Refusing update from remote peer 8b67450f926a4305baa2491c3514ea10: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250812 01:53:38.142396 5775 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Refusing update from remote peer 8b67450f926a4305baa2491c3514ea10: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250812 01:53:38.143973 6174 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Connected to new peer: Peer: permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.001s
I20250812 01:53:38.145102 6133 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [LEADER]: Connected to new peer: Peer: permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250812 01:53:38.164139 6174 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } } }
I20250812 01:53:38.170816 5775 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } } }
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
427d60813b5b4e92afb85a2eb2ec2521 | 127.2.74.126:33421 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
I20250812 01:53:38.183750 5631 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: config changed from index 12 to 13, 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8b67450f926a4305baa2491c3514ea10" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:42607 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
24bab58009374a4f9c7794a5c6b58664 | 127.2.74.67:35519 | HEALTHY | <none> | 1 | 0
75adb7b24ff64a85957dcaf4bdd728d1 | 127.2.74.65:39813 | HEALTHY | <none> | 1 | 0
8b67450f926a4305baa2491c3514ea10 | 127.2.74.66:36061 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.74.65 | experimental | 127.2.74.65:39813
local_ip_for_outbound_sockets | 127.2.74.66 | experimental | 127.2.74.66:36061
local_ip_for_outbound_sockets | 127.2.74.67 | experimental | 127.2.74.67:35519
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.2.74.65:39813
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.2.74.66:36061
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.2.74.67:35519
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:42607 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
I20250812 01:53:38.191722 6082 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } } }
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
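The report above, from "Master Summary" through the flag warnings and the final OK, is the output of the cluster health check that the test shells out to, interleaved with the regular test log. A minimal sketch of producing the same report against this cluster, using the master address shown in the Master Summary table:
# Checks master and tablet server health, unusual flags, versions, and
# per-table tablet status, as printed above.
kudu cluster ksck 127.2.74.126:33421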
I20250812 01:53:38.203291 2345 log_verifier.cc:126] Checking tablet 0befab116793421e8a070f9325978c9e
I20250812 01:53:38.295658 2345 log_verifier.cc:177] Verified matching terms for 9 ops in tablet 0befab116793421e8a070f9325978c9e
I20250812 01:53:38.296001 2345 log_verifier.cc:126] Checking tablet 5a886d1478434d1fa5960c6fad954082
I20250812 01:53:38.323045 2345 log_verifier.cc:177] Verified matching terms for 7 ops in tablet 5a886d1478434d1fa5960c6fad954082
I20250812 01:53:38.323292 2345 log_verifier.cc:126] Checking tablet 7c3a45123f804700b0747994b958cf8c
I20250812 01:53:38.326040 6177 raft_consensus.cc:1062] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: attempting to promote NON_VOTER 8b67450f926a4305baa2491c3514ea10 to VOTER
I20250812 01:53:38.327903 6177 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } }
I20250812 01:53:38.333478 5775 raft_consensus.cc:1273] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Refusing update from remote peer 24bab58009374a4f9c7794a5c6b58664: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250812 01:53:38.334756 5920 raft_consensus.cc:1273] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 LEARNER]: Refusing update from remote peer 24bab58009374a4f9c7794a5c6b58664: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250812 01:53:38.335121 6204 consensus_queue.cc:1035] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Connected to new peer: Peer: permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.001s
I20250812 01:53:38.336784 6177 consensus_queue.cc:1035] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
I20250812 01:53:38.341774 6177 raft_consensus.cc:1025] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: attempt to promote peer 75adb7b24ff64a85957dcaf4bdd728d1: there is already a config change operation in progress. Unable to promote follower until it completes. Doing nothing.
I20250812 01:53:38.347154 6204 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } } }
I20250812 01:53:38.348955 5920 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } } }
I20250812 01:53:38.356385 5775 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } } }
I20250812 01:53:38.359525 5632 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: config changed from index 9 to 10, 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: NON_VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: true } health_report { overall_health: HEALTHY } } }
I20250812 01:53:38.368511 6209 raft_consensus.cc:1062] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: attempting to promote NON_VOTER 75adb7b24ff64a85957dcaf4bdd728d1 to VOTER
I20250812 01:53:38.370641 6209 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:38.376332 5775 raft_consensus.cc:1273] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 LEARNER]: Refusing update from remote peer 24bab58009374a4f9c7794a5c6b58664: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250812 01:53:38.376405 5920 raft_consensus.cc:1273] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Refusing update from remote peer 24bab58009374a4f9c7794a5c6b58664: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250812 01:53:38.377784 6204 consensus_queue.cc:1035] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250812 01:53:38.379451 6209 consensus_queue.cc:1035] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [LEADER]: Connected to new peer: Peer: permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250812 01:53:38.385051 6177 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:38.386540 5920 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:38.388864 5774 raft_consensus.cc:2953] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:38.399885 5631 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 reported cstate change: config changed from index 10 to 11, 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "24bab58009374a4f9c7794a5c6b58664" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
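The NON_VOTER to VOTER promotions above are performed automatically by the leader because the learners were added with attrs { promote: true } and have caught up to the committed index. A replica's type can also be changed explicitly; a hedged sketch using the master address, tablet ID, and tserver UUID from the surrounding lines:
# Promote the replica on tserver 75adb7b24ff64a85957dcaf4bdd728d1 to a full voter.
kudu tablet change_config change_replica_type 127.2.74.126:33421 \
    0befab116793421e8a070f9325978c9e 75adb7b24ff64a85957dcaf4bdd728d1 VOTER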
I20250812 01:53:38.437534 2345 log_verifier.cc:177] Verified matching terms for 13 ops in tablet 7c3a45123f804700b0747994b958cf8c
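The LogVerifier pass above cross-checks that committed operations carry matching terms in every replica's WAL for each tablet. A hedged sketch of inspecting one replica's WAL by hand with the local_replica tool, run against a stopped tablet server and with placeholder filesystem paths:
# Dump the WAL entries of tablet 0befab116793421e8a070f9325978c9e from a
# stopped tserver's local filesystem; the --fs_* paths are placeholders.
kudu local_replica dump wals 0befab116793421e8a070f9325978c9e \
    --fs_wal_dir=/path/to/ts-0/wal \
    --fs_data_dirs=/path/to/ts-0/data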
I20250812 01:53:38.437944 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5600
I20250812 01:53:38.464105 2345 minidump.cc:252] Setting minidump size limit to 20M
I20250812 01:53:38.465477 2345 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:38.466796 2345 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:38.479009 6212 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:38.481299 6215 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:38.479166 6213 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:38.578860 2345 server_base.cc:1047] running on GCE node
I20250812 01:53:38.580019 2345 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250812 01:53:38.580233 2345 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250812 01:53:38.580399 2345 hybrid_clock.cc:648] HybridClock initialized: now 1754963618580380 us; error 0 us; skew 500 ppm
I20250812 01:53:38.581053 2345 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:38.584133 2345 webserver.cc:489] Webserver started at http://0.0.0.0:38589/ using document root <none> and password file <none>
I20250812 01:53:38.584992 2345 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:38.585176 2345 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:38.590297 2345 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:53:38.593894 6220 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:38.594797 2345 fs_manager.cc:730] Time spent opening block manager: real 0.002s user 0.004s sys 0.000s
I20250812 01:53:38.595115 2345 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:38.596837 2345 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:38.611853 2345 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:38.613291 2345 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:38.613723 2345 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:38.622023 2345 sys_catalog.cc:263] Verifying existing consensus state
W20250812 01:53:38.625478 2345 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.2.74.126:33421 exists but no master address supplied!
I20250812 01:53:38.627447 2345 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap starting.
I20250812 01:53:38.668758 2345 log.cc:826] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:38.730036 2345 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap replayed 1/1 log segments. Stats: ops{read=30 overwritten=0 applied=30 ignored=0} inserts{seen=13 ignored=0} mutations{seen=21 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:38.730800 2345 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap complete.
I20250812 01:53:38.744203 2345 raft_consensus.cc:357] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:38.744824 2345 raft_consensus.cc:738] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Initialized, Role: FOLLOWER
I20250812 01:53:38.745532 2345 consensus_queue.cc:260] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:38.745990 2345 raft_consensus.cc:397] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:38.746228 2345 raft_consensus.cc:491] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:38.746552 2345 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 3 FOLLOWER]: Advancing to term 4
I20250812 01:53:38.751817 2345 raft_consensus.cc:513] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:38.752560 2345 leader_election.cc:304] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 427d60813b5b4e92afb85a2eb2ec2521; no voters:
I20250812 01:53:38.753824 2345 leader_election.cc:290] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 4 election: Requested vote from peers
I20250812 01:53:38.754236 6227 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 4 FOLLOWER]: Leader election won for term 4
I20250812 01:53:38.755710 6227 raft_consensus.cc:695] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 4 LEADER]: Becoming Leader. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Running, Role: LEADER
I20250812 01:53:38.756522 6227 consensus_queue.cc:237] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 30, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:38.764258 6229 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 427d60813b5b4e92afb85a2eb2ec2521. Latest consensus state: current_term: 4 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:38.764834 6229 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:38.765648 6228 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:38.766173 6228 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:38.794530 2345 tablet_replica.cc:331] stopping tablet replica
I20250812 01:53:38.795102 2345 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 4 LEADER]: Raft consensus shutting down.
I20250812 01:53:38.795517 2345 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 4 FOLLOWER]: Raft consensus is shut down!
I20250812 01:53:38.797684 2345 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250812 01:53:38.798203 2345 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250812 01:53:38.822158 2345 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250812 01:53:39.419845 5965 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
W20250812 01:53:39.438067 6127 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
W20250812 01:53:39.443199 5820 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:33421 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:33421: connect: Connection refused (error 111)
I20250812 01:53:44.046186 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5670
I20250812 01:53:44.073932 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5827
W20250812 01:53:44.100792 6016 connection.cc:537] server connection from 127.2.74.66:46357 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250812 01:53:44.101356 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 5972
I20250812 01:53:44.133965 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--webserver_interface=127.2.74.126
--webserver_port=43517
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:33421 with env {}
W20250812 01:53:44.441319 6300 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:44.441938 6300 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:44.442414 6300 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:44.474213 6300 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:53:44.474550 6300 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:44.474838 6300 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:53:44.475086 6300 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:53:44.510867 6300 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:33421
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:33421
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=43517
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:44.512180 6300 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:44.513785 6300 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:44.523800 6306 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:44.525938 6307 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:44.529995 6309 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:45.746837 6308 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1218 milliseconds
I20250812 01:53:45.746950 6300 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:45.748167 6300 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:45.750746 6300 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:45.752082 6300 hybrid_clock.cc:648] HybridClock initialized: now 1754963625752039 us; error 55 us; skew 500 ppm
I20250812 01:53:45.752926 6300 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:45.758975 6300 webserver.cc:489] Webserver started at http://127.2.74.126:43517/ using document root <none> and password file <none>
I20250812 01:53:45.759902 6300 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:45.760119 6300 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:45.767941 6300 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250812 01:53:45.772578 6316 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:45.773674 6300 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250812 01:53:45.774024 6300 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "427d60813b5b4e92afb85a2eb2ec2521"
format_stamp: "Formatted at 2025-08-12 01:53:14 on dist-test-slave-3nxt"
I20250812 01:53:45.775974 6300 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:45.828166 6300 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:45.829663 6300 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:45.830097 6300 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:45.900391 6300 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:33421
I20250812 01:53:45.900444 6367 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:33421 every 8 connection(s)
I20250812 01:53:45.903225 6300 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:53:45.910970 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6300
I20250812 01:53:45.912402 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:39813
--local_ip_for_outbound_sockets=127.2.74.65
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=37175
--webserver_interface=127.2.74.65
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:45.913448 6368 sys_catalog.cc:263] Verifying existing consensus state
I20250812 01:53:45.918480 6368 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap starting.
I20250812 01:53:45.933135 6368 log.cc:826] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:46.024817 6368 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap replayed 1/1 log segments. Stats: ops{read=34 overwritten=0 applied=34 ignored=0} inserts{seen=15 ignored=0} mutations{seen=23 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:46.025612 6368 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Bootstrap complete.
I20250812 01:53:46.044206 6368 raft_consensus.cc:357] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:46.046403 6368 raft_consensus.cc:738] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Initialized, Role: FOLLOWER
I20250812 01:53:46.047222 6368 consensus_queue.cc:260] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:46.047712 6368 raft_consensus.cc:397] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:46.047971 6368 raft_consensus.cc:491] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:46.048267 6368 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 5 FOLLOWER]: Advancing to term 6
I20250812 01:53:46.053390 6368 raft_consensus.cc:513] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:46.054049 6368 leader_election.cc:304] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 427d60813b5b4e92afb85a2eb2ec2521; no voters:
I20250812 01:53:46.056149 6368 leader_election.cc:290] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [CANDIDATE]: Term 6 election: Requested vote from peers
I20250812 01:53:46.056564 6372 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 6 FOLLOWER]: Leader election won for term 6
I20250812 01:53:46.059805 6372 raft_consensus.cc:695] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [term 6 LEADER]: Becoming Leader. State: Replica: 427d60813b5b4e92afb85a2eb2ec2521, State: Running, Role: LEADER
I20250812 01:53:46.060741 6372 consensus_queue.cc:237] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 34, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } }
I20250812 01:53:46.061311 6368 sys_catalog.cc:564] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:53:46.071652 6374 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 427d60813b5b4e92afb85a2eb2ec2521. Latest consensus state: current_term: 6 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:46.073388 6373 sys_catalog.cc:455] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "427d60813b5b4e92afb85a2eb2ec2521" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "427d60813b5b4e92afb85a2eb2ec2521" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 33421 } } }
I20250812 01:53:46.074265 6373 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:46.076417 6374 sys_catalog.cc:458] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521 [sys.catalog]: This master's current role is: LEADER
I20250812 01:53:46.088565 6380 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:53:46.103401 6380 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=184467c7bd934ef09a88bd6b257ef354]
I20250812 01:53:46.105273 6380 catalog_manager.cc:671] Loaded metadata for table TestTable [id=a8c573d1b60143559609d6f16fcfc021]
I20250812 01:53:46.106909 6380 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=ad8d0e50b8e84bef849de7defdd7858f]
I20250812 01:53:46.116226 6380 tablet_loader.cc:96] loaded metadata for tablet 0befab116793421e8a070f9325978c9e (table TestTable1 [id=ad8d0e50b8e84bef849de7defdd7858f])
I20250812 01:53:46.119427 6380 tablet_loader.cc:96] loaded metadata for tablet 5a886d1478434d1fa5960c6fad954082 (table TestTable2 [id=184467c7bd934ef09a88bd6b257ef354])
I20250812 01:53:46.121122 6380 tablet_loader.cc:96] loaded metadata for tablet 7c3a45123f804700b0747994b958cf8c (table TestTable [id=a8c573d1b60143559609d6f16fcfc021])
I20250812 01:53:46.122696 6380 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:53:46.128751 6380 catalog_manager.cc:1261] Loaded cluster ID: bec667d7ad814733a500bc3be06ede07
I20250812 01:53:46.129125 6380 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:53:46.139463 6380 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:53:46.145536 6380 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 427d60813b5b4e92afb85a2eb2ec2521: Loaded TSK: 0
I20250812 01:53:46.147280 6380 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250812 01:53:46.289487 6370 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:46.290019 6370 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:46.290524 6370 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:46.322759 6370 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:46.323634 6370 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:53:46.359999 6370 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:39813
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=37175
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:46.361399 6370 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:46.363044 6370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:46.375887 6395 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:46.377701 6396 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:47.851222 6398 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:47.854364 6397 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1473 milliseconds
I20250812 01:53:47.854453 6370 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:47.855643 6370 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:47.859505 6370 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:47.860937 6370 hybrid_clock.cc:648] HybridClock initialized: now 1754963627860891 us; error 81 us; skew 500 ppm
I20250812 01:53:47.861739 6370 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:47.869144 6370 webserver.cc:489] Webserver started at http://127.2.74.65:37175/ using document root <none> and password file <none>
I20250812 01:53:47.870080 6370 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:47.870293 6370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:47.878525 6370 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.004s sys 0.001s
I20250812 01:53:47.883901 6405 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:47.885044 6370 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250812 01:53:47.885370 6370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "75adb7b24ff64a85957dcaf4bdd728d1"
format_stamp: "Formatted at 2025-08-12 01:53:17 on dist-test-slave-3nxt"
I20250812 01:53:47.887276 6370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:47.954859 6370 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:47.956351 6370 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:47.956821 6370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:47.959951 6370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:47.967121 6412 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250812 01:53:47.984483 6370 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250812 01:53:47.984870 6370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.020s user 0.000s sys 0.002s
I20250812 01:53:47.985211 6370 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250812 01:53:47.993360 6412 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:48.003723 6370 ts_tablet_manager.cc:610] Registered 3 tablets
I20250812 01:53:48.004021 6370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.019s user 0.012s sys 0.004s
I20250812 01:53:48.049292 6412 log.cc:826] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:48.203464 6412 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:48.204753 6370 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:39813
I20250812 01:53:48.204828 6412 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:48.204913 6519 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:39813 every 8 connection(s)
I20250812 01:53:48.206804 6412 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.214s user 0.163s sys 0.047s
I20250812 01:53:48.208470 6370 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:53:48.213428 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6370
I20250812 01:53:48.215274 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:36061
--local_ip_for_outbound_sockets=127.2.74.66
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=45969
--webserver_interface=127.2.74.66
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:48.235874 6412 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:48.240444 6412 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: FOLLOWER
I20250812 01:53:48.242019 6412 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:48.259109 6412 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.052s user 0.039s sys 0.012s
I20250812 01:53:48.260025 6412 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:48.263711 6520 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:48.264202 6520 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:48.265636 6520 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:48.271142 6333 ts_manager.cc:194] Registered new tserver with Master: 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65:39813)
I20250812 01:53:48.276445 6333 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: config changed from index -1 to 13, term changed from 0 to 2, VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) added, VOTER 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65) added, VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) added. New cstate: current_term: 2 committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } } }
I20250812 01:53:48.358556 6333 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:57327
I20250812 01:53:48.363242 6520 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:48.400543 6412 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:48.401516 6412 tablet_bootstrap.cc:492] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:48.403304 6412 ts_tablet_manager.cc:1397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.143s user 0.120s sys 0.019s
I20250812 01:53:48.405794 6412 raft_consensus.cc:357] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:48.406329 6412 raft_consensus.cc:738] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: FOLLOWER
I20250812 01:53:48.406939 6412 consensus_queue.cc:260] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:48.407516 6412 raft_consensus.cc:397] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:53:48.407912 6412 raft_consensus.cc:491] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:53:48.408322 6412 raft_consensus.cc:3058] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:48.417407 6412 raft_consensus.cc:513] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:48.418388 6412 leader_election.cc:304] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1; no voters:
I20250812 01:53:48.419129 6412 leader_election.cc:290] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250812 01:53:48.419414 6525 raft_consensus.cc:2802] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Leader election won for term 3
I20250812 01:53:48.430763 6525 raft_consensus.cc:695] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 LEADER]: Becoming Leader. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Running, Role: LEADER
I20250812 01:53:48.431806 6525 consensus_queue.cc:237] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } }
I20250812 01:53:48.438552 6412 ts_tablet_manager.cc:1428] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.035s user 0.024s sys 0.012s
I20250812 01:53:48.439368 6412 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap starting.
I20250812 01:53:48.448371 6332 catalog_manager.cc:5582] T 5a886d1478434d1fa5960c6fad954082 P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } health_report { overall_health: HEALTHY } } }
I20250812 01:53:48.583319 6412 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:48.584208 6412 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Bootstrap complete.
I20250812 01:53:48.585932 6412 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent bootstrapping tablet: real 0.147s user 0.121s sys 0.023s
I20250812 01:53:48.588025 6412 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:48.588724 6412 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Initialized, Role: FOLLOWER
I20250812 01:53:48.589329 6412 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:48.591193 6412 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
W20250812 01:53:48.676702 6524 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:48.677214 6524 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:48.677731 6524 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:48.709076 6524 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:48.709939 6524 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:53:48.747737 6524 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:36061
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=45969
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:48.749130 6524 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:48.750717 6524 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:48.765916 6541 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:49.561472 6547 raft_consensus.cc:491] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:53:49.562145 6547 raft_consensus.cc:513] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:49.566409 6547 leader_election.cc:290] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061), 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
W20250812 01:53:49.580039 6407 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
W20250812 01:53:49.586935 6407 leader_election.cc:336] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
W20250812 01:53:49.587872 6409 leader_election.cc:336] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061): Network error: Client connection negotiation failed: client connection to 127.2.74.66:36061: connect: Connection refused (error 111)
I20250812 01:53:49.588454 6409 leader_election.cc:304] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1; no voters: 24bab58009374a4f9c7794a5c6b58664, 8b67450f926a4305baa2491c3514ea10
I20250812 01:53:49.589931 6547 raft_consensus.cc:2747] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
W20250812 01:53:48.767843 6542 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:48.770872 6544 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:49.998730 6543 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1227 milliseconds
I20250812 01:53:49.998838 6524 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:50.000023 6524 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:50.002204 6524 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:50.003583 6524 hybrid_clock.cc:648] HybridClock initialized: now 1754963630003559 us; error 58 us; skew 500 ppm
I20250812 01:53:50.004428 6524 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:50.010540 6524 webserver.cc:489] Webserver started at http://127.2.74.66:45969/ using document root <none> and password file <none>
I20250812 01:53:50.011499 6524 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:50.011739 6524 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:50.019584 6524 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.000s
I20250812 01:53:50.024611 6555 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:50.025740 6524 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250812 01:53:50.026098 6524 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8b67450f926a4305baa2491c3514ea10"
format_stamp: "Formatted at 2025-08-12 01:53:19 on dist-test-slave-3nxt"
I20250812 01:53:50.028008 6524 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:50.085325 6524 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:50.086822 6524 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:50.087392 6524 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:50.090059 6524 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:50.095705 6562 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250812 01:53:50.107424 6524 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250812 01:53:50.107707 6524 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250812 01:53:50.107944 6524 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250812 01:53:50.113382 6562 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap starting.
I20250812 01:53:50.115996 6524 ts_tablet_manager.cc:610] Registered 2 tablets
I20250812 01:53:50.116202 6524 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.005s sys 0.000s
I20250812 01:53:50.171769 6562 log.cc:826] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:50.286700 6562 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:50.287533 6562 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Bootstrap complete.
I20250812 01:53:50.288975 6562 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent bootstrapping tablet: real 0.176s user 0.131s sys 0.043s
I20250812 01:53:50.301373 6524 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:36061
I20250812 01:53:50.301718 6669 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:36061 every 8 connection(s)
I20250812 01:53:50.302762 6670 raft_consensus.cc:491] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:53:50.303205 6670 raft_consensus.cc:513] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:50.304486 6524 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:53:50.305136 6670 leader_election.cc:290] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519), 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:50.309821 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6524
I20250812 01:53:50.308279 6562 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:50.311693 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:35519
--local_ip_for_outbound_sockets=127.2.74.67
--tserver_master_addrs=127.2.74.126:33421
--webserver_port=38669
--webserver_interface=127.2.74.67
--builtin_ntp_servers=127.2.74.84:42607
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250812 01:53:50.311800 6562 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Initialized, Role: FOLLOWER
I20250812 01:53:50.312884 6562 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:50.324029 6562 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Time spent starting tablet: real 0.035s user 0.023s sys 0.009s
I20250812 01:53:50.325026 6562 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap starting.
W20250812 01:53:50.349326 6407 leader_election.cc:336] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
I20250812 01:53:50.359194 6672 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:50.359656 6672 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:50.361151 6672 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:50.364971 6625 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0befab116793421e8a070f9325978c9e" candidate_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "8b67450f926a4305baa2491c3514ea10" is_pre_election: true
W20250812 01:53:50.367327 6409 leader_election.cc:343] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Tablet error from VoteRequest() call to peer 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061): Illegal state: must be running to vote when last-logged opid is not known
I20250812 01:53:50.367810 6409 leader_election.cc:304] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1; no voters: 24bab58009374a4f9c7794a5c6b58664, 8b67450f926a4305baa2491c3514ea10
I20250812 01:53:50.368652 6332 ts_manager.cc:194] Registered new tserver with Master: 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:50.368822 6670 raft_consensus.cc:2747] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250812 01:53:50.372943 6332 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:48575
I20250812 01:53:50.377249 6672 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:50.457415 6562 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:50.458106 6562 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Bootstrap complete.
I20250812 01:53:50.459357 6562 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Time spent bootstrapping tablet: real 0.135s user 0.115s sys 0.017s
I20250812 01:53:50.460995 6562 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:50.461472 6562 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b67450f926a4305baa2491c3514ea10, State: Initialized, Role: FOLLOWER
I20250812 01:53:50.462020 6562 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:50.463640 6562 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10: Time spent starting tablet: real 0.004s user 0.006s sys 0.000s
W20250812 01:53:50.680346 6680 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:53:50.680917 6680 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:53:50.681409 6680 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:53:50.712893 6680 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:53:50.713760 6680 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:53:50.749079 6680 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:42607
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:35519
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=38669
--tserver_master_addrs=127.2.74.126:33421
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:53:50.750381 6680 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:53:50.752071 6680 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:53:50.765337 6687 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:53:51.217756 6693 raft_consensus.cc:491] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:53:51.218225 6693 raft_consensus.cc:513] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:51.220060 6693 leader_election.cc:290] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061), 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:51.222613 6625 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c3a45123f804700b0747994b958cf8c" candidate_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "8b67450f926a4305baa2491c3514ea10" is_pre_election: true
I20250812 01:53:51.223548 6625 raft_consensus.cc:2466] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 75adb7b24ff64a85957dcaf4bdd728d1 in term 2.
I20250812 01:53:51.225368 6409 leader_election.cc:304] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1, 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:51.227079 6693 raft_consensus.cc:2802] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250812 01:53:51.227551 6693 raft_consensus.cc:491] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:53:51.227960 6693 raft_consensus.cc:3058] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:51.236083 6693 raft_consensus.cc:513] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:51.245007 6693 leader_election.cc:290] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Requested vote from peers 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061), 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:51.246124 6625 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "7c3a45123f804700b0747994b958cf8c" candidate_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "8b67450f926a4305baa2491c3514ea10"
I20250812 01:53:51.246866 6625 raft_consensus.cc:3058] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:51.271754 6625 raft_consensus.cc:2466] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 75adb7b24ff64a85957dcaf4bdd728d1 in term 3.
I20250812 01:53:51.278053 6409 leader_election.cc:304] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1, 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:51.279035 6693 raft_consensus.cc:2802] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Leader election won for term 3
W20250812 01:53:51.281941 6407 leader_election.cc:336] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
W20250812 01:53:51.286217 6407 leader_election.cc:336] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
I20250812 01:53:51.290282 6693 raft_consensus.cc:695] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 LEADER]: Becoming Leader. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Running, Role: LEADER
I20250812 01:53:51.291190 6693 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:51.310087 6332 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: term changed from 2 to 3, leader changed from <none> to 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65). New cstate: current_term: 3 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250812 01:53:51.731040 6625 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Refusing update from remote peer 75adb7b24ff64a85957dcaf4bdd728d1: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250812 01:53:51.733561 6693 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.001s
W20250812 01:53:51.848193 6407 consensus_peers.cc:489] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 -> Peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Couldn't send request to peer 24bab58009374a4f9c7794a5c6b58664. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250812 01:53:51.858860 6475 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 3.14, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:51.868518 6624 raft_consensus.cc:1273] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Refusing update from remote peer 75adb7b24ff64a85957dcaf4bdd728d1: Log matching property violated. Preceding OpId in replica: term: 3 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250812 01:53:51.870875 6693 consensus_queue.cc:1035] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.001s
I20250812 01:53:51.880843 6693 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 LEADER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:51.884505 6625 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:51.906912 6332 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 reported cstate change: config changed from index 13 to 15, VOTER 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) evicted. New cstate: current_term: 3 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:51.922950 6320 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index 13: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
W20250812 01:53:51.956362 6332 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 7c3a45123f804700b0747994b958cf8c on TS 24bab58009374a4f9c7794a5c6b58664: Not found: failed to reset TS proxy: Could not find TS for UUID 24bab58009374a4f9c7794a5c6b58664
I20250812 01:53:51.958356 6475 consensus_queue.cc:237] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 13, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:51.962600 6693 raft_consensus.cc:2953] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 15 to 16, VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } }
I20250812 01:53:51.978502 6320 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 7c3a45123f804700b0747994b958cf8c with cas_config_opid_index 15: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250812 01:53:51.983105 6333 catalog_manager.cc:5582] T 7c3a45123f804700b0747994b958cf8c P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: config changed from index 15 to 16, VOTER 8b67450f926a4305baa2491c3514ea10 (127.2.74.66) evicted. New cstate: current_term: 3 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250812 01:53:51.984607 6693 raft_consensus.cc:491] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:53:51.985378 6693 raft_consensus.cc:513] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:51.988277 6693 leader_election.cc:290] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519), 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:51.991595 6625 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0befab116793421e8a070f9325978c9e" candidate_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "8b67450f926a4305baa2491c3514ea10" is_pre_election: true
I20250812 01:53:51.992192 6625 raft_consensus.cc:2466] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 75adb7b24ff64a85957dcaf4bdd728d1 in term 2.
I20250812 01:53:51.993444 6409 leader_election.cc:304] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1, 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:51.994270 6693 raft_consensus.cc:2802] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250812 01:53:51.994837 6693 raft_consensus.cc:491] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:53:51.995527 6693 raft_consensus.cc:3058] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 2 FOLLOWER]: Advancing to term 3
W20250812 01:53:51.994709 6407 leader_election.cc:336] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
I20250812 01:53:52.005496 6693 raft_consensus.cc:513] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
W20250812 01:53:52.010138 6318 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 7c3a45123f804700b0747994b958cf8c on TS 24bab58009374a4f9c7794a5c6b58664 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 24bab58009374a4f9c7794a5c6b58664
I20250812 01:53:52.013967 6693 leader_election.cc:290] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Requested vote from peers 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519), 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061)
I20250812 01:53:52.015064 6625 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0befab116793421e8a070f9325978c9e" candidate_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "8b67450f926a4305baa2491c3514ea10"
I20250812 01:53:52.015654 6625 raft_consensus.cc:3058] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:52.022645 6625 raft_consensus.cc:2466] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 75adb7b24ff64a85957dcaf4bdd728d1 in term 3.
I20250812 01:53:52.023762 6409 leader_election.cc:304] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 75adb7b24ff64a85957dcaf4bdd728d1, 8b67450f926a4305baa2491c3514ea10; no voters:
I20250812 01:53:52.024633 6693 raft_consensus.cc:2802] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 FOLLOWER]: Leader election won for term 3
I20250812 01:53:52.025235 6693 raft_consensus.cc:695] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [term 3 LEADER]: Becoming Leader. State: Replica: 75adb7b24ff64a85957dcaf4bdd728d1, State: Running, Role: LEADER
I20250812 01:53:52.026065 6693 consensus_queue.cc:237] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
W20250812 01:53:52.033582 6407 leader_election.cc:336] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111)
I20250812 01:53:52.043613 6333 catalog_manager.cc:5582] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 reported cstate change: term changed from 2 to 3, leader changed from 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67) to 75adb7b24ff64a85957dcaf4bdd728d1 (127.2.74.65). New cstate: current_term: 3 leader_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250812 01:53:52.084270 6605 tablet_service.cc:1515] Processing DeleteTablet for tablet 7c3a45123f804700b0747994b958cf8c with delete_type TABLET_DATA_TOMBSTONED (TS 8b67450f926a4305baa2491c3514ea10 not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:45684
I20250812 01:53:52.101291 6715 tablet_replica.cc:331] stopping tablet replica
I20250812 01:53:52.107431 6715 raft_consensus.cc:2241] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250812 01:53:52.109779 6715 raft_consensus.cc:2270] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250812 01:53:52.122704 6715 ts_tablet_manager.cc:1905] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250812 01:53:52.175261 6715 ts_tablet_manager.cc:1918] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.15
I20250812 01:53:52.176021 6715 log.cc:1199] T 7c3a45123f804700b0747994b958cf8c P 8b67450f926a4305baa2491c3514ea10: Deleting WAL directory at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/wals/7c3a45123f804700b0747994b958cf8c
I20250812 01:53:52.178597 6320 catalog_manager.cc:4928] TS 8b67450f926a4305baa2491c3514ea10 (127.2.74.66:36061): tablet 7c3a45123f804700b0747994b958cf8c (table TestTable [id=a8c573d1b60143559609d6f16fcfc021]) successfully deleted
I20250812 01:53:52.428058 6625 raft_consensus.cc:1273] T 0befab116793421e8a070f9325978c9e P 8b67450f926a4305baa2491c3514ea10 [term 3 FOLLOWER]: Refusing update from remote peer 75adb7b24ff64a85957dcaf4bdd728d1: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250812 01:53:52.430423 6693 consensus_queue.cc:1035] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
W20250812 01:53:52.486795 6407 consensus_peers.cc:489] T 0befab116793421e8a070f9325978c9e P 75adb7b24ff64a85957dcaf4bdd728d1 -> Peer 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): Couldn't send request to peer 24bab58009374a4f9c7794a5c6b58664. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.67:35519: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250812 01:53:52.169106 6686 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 6680
W20250812 01:53:50.765761 6688 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:52.507175 6680 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.742s user 0.611s sys 1.001s
W20250812 01:53:52.507905 6680 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.743s user 0.611s sys 1.001s
W20250812 01:53:52.509361 6690 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:53:52.513204 6689 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1746 milliseconds
I20250812 01:53:52.513227 6680 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:53:52.514485 6680 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:53:52.516446 6680 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:53:52.517828 6680 hybrid_clock.cc:648] HybridClock initialized: now 1754963632517763 us; error 70 us; skew 500 ppm
I20250812 01:53:52.518635 6680 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:53:52.524710 6680 webserver.cc:489] Webserver started at http://127.2.74.67:38669/ using document root <none> and password file <none>
I20250812 01:53:52.525637 6680 fs_manager.cc:362] Metadata directory not provided
I20250812 01:53:52.525861 6680 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:53:52.533789 6680 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.001s
I20250812 01:53:52.538931 6724 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:53:52.540208 6680 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250812 01:53:52.540545 6680 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "24bab58009374a4f9c7794a5c6b58664"
format_stamp: "Formatted at 2025-08-12 01:53:21 on dist-test-slave-3nxt"
I20250812 01:53:52.542611 6680 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:53:52.599225 6680 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:53:52.600759 6680 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:53:52.601192 6680 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:53:52.603788 6680 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:53:52.609508 6731 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250812 01:53:52.621295 6680 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250812 01:53:52.621582 6680 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.014s user 0.000s sys 0.002s
I20250812 01:53:52.621886 6680 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250812 01:53:52.627249 6731 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap starting.
I20250812 01:53:52.629998 6680 ts_tablet_manager.cc:610] Registered 2 tablets
I20250812 01:53:52.630213 6680 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.007s sys 0.000s
I20250812 01:53:52.682901 6731 log.cc:826] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Log is configured to *not* fsync() on all Append() calls
I20250812 01:53:52.795091 6731 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:52.796001 6731 tablet_bootstrap.cc:492] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Bootstrap complete.
I20250812 01:53:52.797386 6731 ts_tablet_manager.cc:1397] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Time spent bootstrapping tablet: real 0.170s user 0.133s sys 0.036s
I20250812 01:53:52.806635 6680 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:35519
I20250812 01:53:52.806838 6838 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:35519 every 8 connection(s)
I20250812 01:53:52.809226 6680 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:53:52.813318 6731 raft_consensus.cc:357] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:52.816794 6731 raft_consensus.cc:738] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Initialized, Role: FOLLOWER
I20250812 01:53:52.817687 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6680
I20250812 01:53:52.817749 6731 consensus_queue.cc:260] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } } peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } attrs { promote: false } }
I20250812 01:53:52.822204 6731 ts_tablet_manager.cc:1428] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Time spent starting tablet: real 0.024s user 0.019s sys 0.006s
I20250812 01:53:52.823062 6731 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap starting.
I20250812 01:53:52.833413 6839 heartbeater.cc:344] Connected to a master server at 127.2.74.126:33421
I20250812 01:53:52.833932 6839 heartbeater.cc:461] Registering TS with master...
I20250812 01:53:52.835114 6839 heartbeater.cc:507] Master 127.2.74.126:33421 requested a full tablet report, sending...
I20250812 01:53:52.839648 6333 ts_manager.cc:194] Registered new tserver with Master: 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519)
I20250812 01:53:52.843513 6333 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:59443
I20250812 01:53:52.846652 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:53:52.848001 6839 heartbeater.cc:499] Master 127.2.74.126:33421 was elected leader, sending a full tablet report...
I20250812 01:53:52.851801 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250812 01:53:52.855113 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250812 01:53:52.930066 6731 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:53:52.930953 6731 tablet_bootstrap.cc:492] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Bootstrap complete.
I20250812 01:53:52.932631 6731 ts_tablet_manager.cc:1397] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent bootstrapping tablet: real 0.110s user 0.093s sys 0.012s
I20250812 01:53:52.934749 6731 raft_consensus.cc:357] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:52.935356 6731 raft_consensus.cc:738] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24bab58009374a4f9c7794a5c6b58664, State: Initialized, Role: FOLLOWER
I20250812 01:53:52.935914 6731 consensus_queue.cc:260] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "24bab58009374a4f9c7794a5c6b58664" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 35519 } } peers { permanent_uuid: "8b67450f926a4305baa2491c3514ea10" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 36061 } attrs { promote: false } } peers { permanent_uuid: "75adb7b24ff64a85957dcaf4bdd728d1" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 39813 } attrs { promote: false } }
I20250812 01:53:52.937799 6731 ts_tablet_manager.cc:1428] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
I20250812 01:53:53.039891 6794 raft_consensus.cc:3058] T 0befab116793421e8a070f9325978c9e P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Advancing to term 3
I20250812 01:53:53.187062 6774 tablet_service.cc:1515] Processing DeleteTablet for tablet 7c3a45123f804700b0747994b958cf8c with delete_type TABLET_DATA_TOMBSTONED (TS 24bab58009374a4f9c7794a5c6b58664 not found in new config with opid_index 15) from {username='slave'} at 127.0.0.1:51348
I20250812 01:53:53.192759 6852 tablet_replica.cc:331] stopping tablet replica
I20250812 01:53:53.193641 6852 raft_consensus.cc:2241] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250812 01:53:53.194214 6852 raft_consensus.cc:2270] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250812 01:53:53.197819 6852 ts_tablet_manager.cc:1905] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250812 01:53:53.207618 6852 ts_tablet_manager.cc:1918] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.13
I20250812 01:53:53.207975 6852 log.cc:1199] T 7c3a45123f804700b0747994b958cf8c P 24bab58009374a4f9c7794a5c6b58664: Deleting WAL directory at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/wals/7c3a45123f804700b0747994b958cf8c
I20250812 01:53:53.209530 6318 catalog_manager.cc:4928] TS 24bab58009374a4f9c7794a5c6b58664 (127.2.74.67:35519): tablet 7c3a45123f804700b0747994b958cf8c (table TestTable [id=a8c573d1b60143559609d6f16fcfc021]) successfully deleted
W20250812 01:53:53.859561 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:54.863790 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:55.867199 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:56.870510 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:57.873929 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:58.877637 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:53:59.881093 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:00.884289 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:01.628299 6666 debug-util.cc:398] Leaking SignalData structure 0x7b08000c54a0 after lost signal to thread 6536
W20250812 01:54:01.629202 6666 debug-util.cc:398] Leaking SignalData structure 0x7b08000c50e0 after lost signal to thread 6669
W20250812 01:54:01.887781 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:02.891547 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:03.895633 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:04.899530 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:05.903335 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:06.906647 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:07.910002 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:08.913270 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:09.916493 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:10.919831 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250812 01:54:11.923053 2345 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 7c3a45123f804700b0747994b958cf8c: tablet_id: "7c3a45123f804700b0747994b958cf8c" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250812 01:54:12.926119 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6370
I20250812 01:54:12.953416 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6524
I20250812 01:54:12.981170 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6680
I20250812 01:54:13.008956 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6300
2025-08-12T01:54:13Z chronyd exiting
I20250812 01:54:13.059604 2345 test_util.cc:183] -----------------------------------------------
I20250812 01:54:13.059809 2345 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754963523148890-2345-0
[ FAILED ] AdminCliTest.TestRebuildTables (59938 ms)
[----------] 5 tests from AdminCliTest (129839 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250812 01:54:13.063728 2345 test_util.cc:276] Using random seed: 1361212434
I20250812 01:54:13.068521 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:54:13.068742 2345 ts_itest-base.cc:116] --------------
I20250812 01:54:13.068861 2345 ts_itest-base.cc:117] 5 tablet servers
I20250812 01:54:13.068969 2345 ts_itest-base.cc:118] 3 replicas per TS
I20250812 01:54:13.069087 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:54:13Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:54:13Z Disabled control of system clock
I20250812 01:54:13.113871 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:45231
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:45231
--raft_prepare_replacement_before_eviction=true with env {}
W20250812 01:54:13.416548 6874 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:13.417188 6874 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:13.417752 6874 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:13.449371 6874 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:13.449781 6874 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:54:13.450047 6874 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:13.450289 6874 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:54:13.450516 6874 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:54:13.486106 6874 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:45231
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:45231
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:13.487413 6874 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:13.489032 6874 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:13.499362 6880 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:13.504096 6883 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:13.500727 6881 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:14.690341 6882 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1186 milliseconds
I20250812 01:54:14.690474 6874 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:14.691715 6874 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:14.694272 6874 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:14.695596 6874 hybrid_clock.cc:648] HybridClock initialized: now 1754963654695561 us; error 58 us; skew 500 ppm
I20250812 01:54:14.696363 6874 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:14.702913 6874 webserver.cc:489] Webserver started at http://127.2.74.126:37823/ using document root <none> and password file <none>
I20250812 01:54:14.703821 6874 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:14.704034 6874 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:14.704489 6874 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:14.708930 6874 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "9d9ad5ec65404202a4812db7bff5bf36"
format_stamp: "Formatted at 2025-08-12 01:54:14 on dist-test-slave-3nxt"
I20250812 01:54:14.710027 6874 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "9d9ad5ec65404202a4812db7bff5bf36"
format_stamp: "Formatted at 2025-08-12 01:54:14 on dist-test-slave-3nxt"
I20250812 01:54:14.717406 6874 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250812 01:54:14.722998 6891 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:14.723995 6874 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250812 01:54:14.724306 6874 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "9d9ad5ec65404202a4812db7bff5bf36"
format_stamp: "Formatted at 2025-08-12 01:54:14 on dist-test-slave-3nxt"
I20250812 01:54:14.724653 6874 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:14.774801 6874 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:14.776253 6874 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:14.776717 6874 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:14.846580 6874 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:45231
I20250812 01:54:14.846657 6942 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:45231 every 8 connection(s)
I20250812 01:54:14.849277 6874 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:54:14.854396 6943 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:14.856318 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6874
I20250812 01:54:14.856914 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:54:14.875023 6943 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36: Bootstrap starting.
I20250812 01:54:14.880827 6943 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:14.882680 6943 log.cc:826] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:14.887322 6943 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36: No bootstrap required, opened a new log
I20250812 01:54:14.903869 6943 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } }
I20250812 01:54:14.904520 6943 raft_consensus.cc:383] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:14.904794 6943 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9d9ad5ec65404202a4812db7bff5bf36, State: Initialized, Role: FOLLOWER
I20250812 01:54:14.905467 6943 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } }
I20250812 01:54:14.905941 6943 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:54:14.906217 6943 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:54:14.906539 6943 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:14.910470 6943 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } }
I20250812 01:54:14.911139 6943 leader_election.cc:304] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9d9ad5ec65404202a4812db7bff5bf36; no voters:
I20250812 01:54:14.912762 6943 leader_election.cc:290] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:54:14.913498 6948 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:14.915598 6948 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [term 1 LEADER]: Becoming Leader. State: Replica: 9d9ad5ec65404202a4812db7bff5bf36, State: Running, Role: LEADER
I20250812 01:54:14.916404 6948 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } }
I20250812 01:54:14.917462 6943 sys_catalog.cc:564] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:54:14.926548 6950 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9d9ad5ec65404202a4812db7bff5bf36. Latest consensus state: current_term: 1 leader_uuid: "9d9ad5ec65404202a4812db7bff5bf36" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } } }
I20250812 01:54:14.926810 6949 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9d9ad5ec65404202a4812db7bff5bf36" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9d9ad5ec65404202a4812db7bff5bf36" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 45231 } } }
I20250812 01:54:14.927503 6949 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:14.928682 6950 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:14.931262 6956 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:54:14.944713 6956 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:54:14.959972 6956 catalog_manager.cc:1349] Generated new cluster ID: 4c6133bc260244d9934c94198581ddee
I20250812 01:54:14.960299 6956 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:54:14.974809 6956 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:54:14.976866 6956 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:54:15.001757 6956 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 9d9ad5ec65404202a4812db7bff5bf36: Generated new TSK 0
I20250812 01:54:15.002624 6956 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:54:15.017081 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250812 01:54:15.349689 6967 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:15.350222 6967 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:15.350713 6967 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:15.382529 6967 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:15.382943 6967 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:15.383710 6967 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:54:15.418397 6967 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:15.419720 6967 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:15.421430 6967 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:15.433830 6973 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:16.837265 6972 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6967
W20250812 01:54:15.435473 6974 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:16.966742 6967 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.532s user 0.528s sys 0.998s
W20250812 01:54:16.968850 6975 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1533 milliseconds
W20250812 01:54:16.968861 6976 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:16.969259 6967 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.534s user 0.529s sys 0.998s
I20250812 01:54:16.969584 6967 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:16.973443 6967 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:16.975556 6967 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:16.976905 6967 hybrid_clock.cc:648] HybridClock initialized: now 1754963656976866 us; error 59 us; skew 500 ppm
I20250812 01:54:16.977697 6967 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:16.983940 6967 webserver.cc:489] Webserver started at http://127.2.74.65:35911/ using document root <none> and password file <none>
I20250812 01:54:16.984931 6967 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:16.985154 6967 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:16.985587 6967 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:16.989998 6967 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "b7f96e5543dd4e46a74be37e73eb6582"
format_stamp: "Formatted at 2025-08-12 01:54:16 on dist-test-slave-3nxt"
I20250812 01:54:16.991106 6967 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "b7f96e5543dd4e46a74be37e73eb6582"
format_stamp: "Formatted at 2025-08-12 01:54:16 on dist-test-slave-3nxt"
I20250812 01:54:16.998412 6967 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.004s
I20250812 01:54:17.004285 6983 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:17.005412 6967 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250812 01:54:17.005725 6967 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "b7f96e5543dd4e46a74be37e73eb6582"
format_stamp: "Formatted at 2025-08-12 01:54:16 on dist-test-slave-3nxt"
I20250812 01:54:17.006071 6967 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:17.058264 6967 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:17.059795 6967 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:17.060236 6967 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:17.062858 6967 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:17.067370 6967 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:17.067584 6967 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:17.067828 6967 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:17.067986 6967 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:17.236972 6967 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:36087
I20250812 01:54:17.237092 7095 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:36087 every 8 connection(s)
I20250812 01:54:17.239625 6967 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:54:17.248993 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 6967
I20250812 01:54:17.249471 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:54:17.257135 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250812 01:54:17.263526 7096 heartbeater.cc:344] Connected to a master server at 127.2.74.126:45231
I20250812 01:54:17.264079 7096 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:17.265487 7096 heartbeater.cc:507] Master 127.2.74.126:45231 requested a full tablet report, sending...
I20250812 01:54:17.268841 6908 ts_manager.cc:194] Registered new tserver with Master: b7f96e5543dd4e46a74be37e73eb6582 (127.2.74.65:36087)
I20250812 01:54:17.270823 6908 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:40443
W20250812 01:54:17.579382 7100 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:17.579887 7100 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:17.580377 7100 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:17.611271 7100 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:17.611697 7100 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:17.612502 7100 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:54:17.646894 7100 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:17.648223 7100 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:17.649816 7100 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:17.662051 7106 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:18.274255 7096 heartbeater.cc:499] Master 127.2.74.126:45231 was elected leader, sending a full tablet report...
W20250812 01:54:19.064857 7105 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 7100
W20250812 01:54:19.214906 7105 kernel_stack_watchdog.cc:198] Thread 7100 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:54:17.662464 7107 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:19.215637 7100 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.553s user 0.573s sys 0.980s
W20250812 01:54:19.216116 7100 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.554s user 0.574s sys 0.980s
W20250812 01:54:19.217953 7109 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:19.220247 7108 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1553 milliseconds
I20250812 01:54:19.220304 7100 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:19.221513 7100 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:19.223383 7100 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:19.224682 7100 hybrid_clock.cc:648] HybridClock initialized: now 1754963659224649 us; error 51 us; skew 500 ppm
I20250812 01:54:19.225381 7100 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:19.230963 7100 webserver.cc:489] Webserver started at http://127.2.74.66:37345/ using document root <none> and password file <none>
I20250812 01:54:19.231792 7100 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:19.231969 7100 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:19.232349 7100 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:19.236626 7100 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "85881f67d05b4ca284c7bee28066db69"
format_stamp: "Formatted at 2025-08-12 01:54:19 on dist-test-slave-3nxt"
I20250812 01:54:19.237757 7100 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "85881f67d05b4ca284c7bee28066db69"
format_stamp: "Formatted at 2025-08-12 01:54:19 on dist-test-slave-3nxt"
I20250812 01:54:19.244982 7100 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250812 01:54:19.250487 7116 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:19.251497 7100 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:54:19.251811 7100 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "85881f67d05b4ca284c7bee28066db69"
format_stamp: "Formatted at 2025-08-12 01:54:19 on dist-test-slave-3nxt"
I20250812 01:54:19.252127 7100 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:19.307063 7100 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:19.308490 7100 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:19.308959 7100 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:19.311439 7100 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:19.315455 7100 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:19.315649 7100 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:19.315924 7100 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:19.316072 7100 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:19.445801 7100 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:44681
I20250812 01:54:19.445895 7228 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:44681 every 8 connection(s)
I20250812 01:54:19.448283 7100 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250812 01:54:19.456485 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7100
I20250812 01:54:19.457016 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250812 01:54:19.464427 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250812 01:54:19.472250 7229 heartbeater.cc:344] Connected to a master server at 127.2.74.126:45231
I20250812 01:54:19.472828 7229 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:19.474208 7229 heartbeater.cc:507] Master 127.2.74.126:45231 requested a full tablet report, sending...
I20250812 01:54:19.476657 6908 ts_manager.cc:194] Registered new tserver with Master: 85881f67d05b4ca284c7bee28066db69 (127.2.74.66:44681)
I20250812 01:54:19.478452 6908 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:48229
W20250812 01:54:19.767401 7233 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:19.767879 7233 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:19.768325 7233 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:19.799670 7233 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:19.800050 7233 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:19.800817 7233 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:54:19.835420 7233 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:19.836699 7233 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:19.838244 7233 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:19.850001 7239 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:20.481595 7229 heartbeater.cc:499] Master 127.2.74.126:45231 was elected leader, sending a full tablet report...
W20250812 01:54:21.254721 7238 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 7233
W20250812 01:54:21.584576 7238 kernel_stack_watchdog.cc:198] Thread 7233 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:54:19.850916 7240 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:21.586792 7233 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.734s user 0.001s sys 0.001s
W20250812 01:54:21.586875 7241 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1735 milliseconds
W20250812 01:54:21.587155 7233 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.734s user 0.001s sys 0.001s
W20250812 01:54:21.591225 7243 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:21.591256 7233 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:21.592501 7233 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:21.594511 7233 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:21.595841 7233 hybrid_clock.cc:648] HybridClock initialized: now 1754963661595805 us; error 53 us; skew 500 ppm
I20250812 01:54:21.596660 7233 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:21.603422 7233 webserver.cc:489] Webserver started at http://127.2.74.67:37605/ using document root <none> and password file <none>
I20250812 01:54:21.604349 7233 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:21.604547 7233 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:21.605059 7233 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:21.609434 7233 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "fe20da274ee345e985d8a8db40df0ff8"
format_stamp: "Formatted at 2025-08-12 01:54:21 on dist-test-slave-3nxt"
I20250812 01:54:21.610515 7233 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "fe20da274ee345e985d8a8db40df0ff8"
format_stamp: "Formatted at 2025-08-12 01:54:21 on dist-test-slave-3nxt"
I20250812 01:54:21.617549 7233 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.003s sys 0.004s
I20250812 01:54:21.623220 7249 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:21.624235 7233 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.000s
I20250812 01:54:21.624559 7233 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "fe20da274ee345e985d8a8db40df0ff8"
format_stamp: "Formatted at 2025-08-12 01:54:21 on dist-test-slave-3nxt"
I20250812 01:54:21.624930 7233 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:21.684106 7233 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:21.685639 7233 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:21.686091 7233 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:21.688459 7233 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:21.692204 7233 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:21.692384 7233 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:21.692569 7233 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:21.692727 7233 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.001s sys 0.000s
I20250812 01:54:21.825222 7233 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:40151
I20250812 01:54:21.825318 7361 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:40151 every 8 connection(s)
I20250812 01:54:21.827739 7233 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250812 01:54:21.838009 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7233
I20250812 01:54:21.838505 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250812 01:54:21.845489 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.68:0
--local_ip_for_outbound_sockets=127.2.74.68
--webserver_interface=127.2.74.68
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250812 01:54:21.852749 7362 heartbeater.cc:344] Connected to a master server at 127.2.74.126:45231
I20250812 01:54:21.853266 7362 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:21.854329 7362 heartbeater.cc:507] Master 127.2.74.126:45231 requested a full tablet report, sending...
I20250812 01:54:21.856312 6908 ts_manager.cc:194] Registered new tserver with Master: fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151)
I20250812 01:54:21.857547 6908 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:55699
W20250812 01:54:22.149151 7366 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:22.149597 7366 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:22.150121 7366 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:22.181891 7366 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:22.182304 7366 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:22.183075 7366 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.68
I20250812 01:54:22.218070 7366 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.68:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.2.74.68
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.68
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:22.219424 7366 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:22.220911 7366 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:22.231370 7372 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:22.232851 7373 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:22.861403 7362 heartbeater.cc:499] Master 127.2.74.126:45231 was elected leader, sending a full tablet report...
W20250812 01:54:23.450870 7375 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:23.453696 7374 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1215 milliseconds
I20250812 01:54:23.453886 7366 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:23.455400 7366 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:23.458196 7366 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:23.459673 7366 hybrid_clock.cc:648] HybridClock initialized: now 1754963663459608 us; error 63 us; skew 500 ppm
I20250812 01:54:23.460683 7366 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:23.467312 7366 webserver.cc:489] Webserver started at http://127.2.74.68:37595/ using document root <none> and password file <none>
I20250812 01:54:23.468178 7366 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:23.468353 7366 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:23.468833 7366 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:23.473013 7366 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "8acfaac2a1bd4b9493542027c2cc3bed"
format_stamp: "Formatted at 2025-08-12 01:54:23 on dist-test-slave-3nxt"
I20250812 01:54:23.474040 7366 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "8acfaac2a1bd4b9493542027c2cc3bed"
format_stamp: "Formatted at 2025-08-12 01:54:23 on dist-test-slave-3nxt"
I20250812 01:54:23.480701 7366 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250812 01:54:23.486155 7383 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:23.487210 7366 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:54:23.487510 7366 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "8acfaac2a1bd4b9493542027c2cc3bed"
format_stamp: "Formatted at 2025-08-12 01:54:23 on dist-test-slave-3nxt"
I20250812 01:54:23.487829 7366 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:23.536893 7366 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:23.538336 7366 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:23.538755 7366 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:23.541410 7366 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:23.545516 7366 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:23.545704 7366 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:23.545971 7366 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:23.546119 7366 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:23.679767 7366 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.68:42221
I20250812 01:54:23.679872 7496 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.68:42221 every 8 connection(s)
I20250812 01:54:23.682360 7366 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250812 01:54:23.686386 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7366
I20250812 01:54:23.687077 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250812 01:54:23.695055 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.69:0
--local_ip_for_outbound_sockets=127.2.74.69
--webserver_interface=127.2.74.69
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--builtin_ntp_servers=127.2.74.84:45879
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250812 01:54:23.704425 7497 heartbeater.cc:344] Connected to a master server at 127.2.74.126:45231
I20250812 01:54:23.704888 7497 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:23.705869 7497 heartbeater.cc:507] Master 127.2.74.126:45231 requested a full tablet report, sending...
I20250812 01:54:23.708184 6908 ts_manager.cc:194] Registered new tserver with Master: 8acfaac2a1bd4b9493542027c2cc3bed (127.2.74.68:42221)
I20250812 01:54:23.709636 6908 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.68:40317
W20250812 01:54:23.999815 7501 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:24.000336 7501 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:24.000874 7501 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:24.033284 7501 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250812 01:54:24.033739 7501 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:24.034524 7501 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.69
I20250812 01:54:24.073323 7501 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:45879
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.69:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.2.74.69
--webserver_port=0
--tserver_master_addrs=127.2.74.126:45231
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.69
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:24.074649 7501 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:24.076251 7501 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:24.087585 7507 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:24.089267 7508 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:24.712941 7497 heartbeater.cc:499] Master 127.2.74.126:45231 was elected leader, sending a full tablet report...
W20250812 01:54:25.445732 7510 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:25.449056 7501 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.360s user 0.464s sys 0.881s
W20250812 01:54:25.449357 7509 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1354 milliseconds
W20250812 01:54:25.449522 7501 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.361s user 0.465s sys 0.881s
I20250812 01:54:25.449887 7501 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:25.451283 7501 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:25.457094 7501 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:25.458603 7501 hybrid_clock.cc:648] HybridClock initialized: now 1754963665458568 us; error 50 us; skew 500 ppm
I20250812 01:54:25.460042 7501 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:25.469347 7501 webserver.cc:489] Webserver started at http://127.2.74.69:42749/ using document root <none> and password file <none>
I20250812 01:54:25.470880 7501 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:25.471220 7501 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:25.471891 7501 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:25.479446 7501 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "1ddae331ea87447f9f7c33c827da5644"
format_stamp: "Formatted at 2025-08-12 01:54:25 on dist-test-slave-3nxt"
I20250812 01:54:25.481238 7501 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "1ddae331ea87447f9f7c33c827da5644"
format_stamp: "Formatted at 2025-08-12 01:54:25 on dist-test-slave-3nxt"
I20250812 01:54:25.491534 7501 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.009s sys 0.001s
I20250812 01:54:25.499485 7518 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:25.500682 7501 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.007s sys 0.000s
I20250812 01:54:25.501083 7501 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "1ddae331ea87447f9f7c33c827da5644"
format_stamp: "Formatted at 2025-08-12 01:54:25 on dist-test-slave-3nxt"
I20250812 01:54:25.501585 7501 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:25.577186 7501 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:25.578611 7501 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:25.579034 7501 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:25.581463 7501 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:25.585445 7501 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:25.585636 7501 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:25.585909 7501 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:25.586093 7501 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:25.723719 7501 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.69:40143
I20250812 01:54:25.723820 7630 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.69:40143 every 8 connection(s)
I20250812 01:54:25.726284 7501 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250812 01:54:25.732908 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7501
I20250812 01:54:25.733289 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250812 01:54:25.746750 7631 heartbeater.cc:344] Connected to a master server at 127.2.74.126:45231
I20250812 01:54:25.747143 7631 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:25.748080 7631 heartbeater.cc:507] Master 127.2.74.126:45231 requested a full tablet report, sending...
I20250812 01:54:25.750012 6908 ts_manager.cc:194] Registered new tserver with Master: 1ddae331ea87447f9f7c33c827da5644 (127.2.74.69:40143)
I20250812 01:54:25.751441 6908 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.69:57785
I20250812 01:54:25.754197 2345 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250812 01:54:25.788442 6908 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:36814:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250812 01:54:25.868198 7164 tablet_service.cc:1468] Processing CreateTablet for tablet 3922978286c849c1b209f35a02ad4b2a (DEFAULT_TABLE table=TestTable [id=59ab1ab6af9f47cab99c76205a6ccb2a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:25.870296 7164 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3922978286c849c1b209f35a02ad4b2a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:25.881337 7566 tablet_service.cc:1468] Processing CreateTablet for tablet 3922978286c849c1b209f35a02ad4b2a (DEFAULT_TABLE table=TestTable [id=59ab1ab6af9f47cab99c76205a6ccb2a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:25.882874 7566 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3922978286c849c1b209f35a02ad4b2a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:25.883652 7297 tablet_service.cc:1468] Processing CreateTablet for tablet 3922978286c849c1b209f35a02ad4b2a (DEFAULT_TABLE table=TestTable [id=59ab1ab6af9f47cab99c76205a6ccb2a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:25.885412 7297 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3922978286c849c1b209f35a02ad4b2a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:25.909971 7650 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Bootstrap starting.
I20250812 01:54:25.917454 7650 tablet_bootstrap.cc:654] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:25.918362 7651 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Bootstrap starting.
I20250812 01:54:25.919647 7652 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Bootstrap starting.
I20250812 01:54:25.920984 7650 log.cc:826] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:25.926620 7652 tablet_bootstrap.cc:654] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:25.927906 7651 tablet_bootstrap.cc:654] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:25.928910 7652 log.cc:826] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:25.930810 7651 log.cc:826] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:25.934638 7650 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: No bootstrap required, opened a new log
I20250812 01:54:25.935102 7650 ts_tablet_manager.cc:1397] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Time spent bootstrapping tablet: real 0.026s user 0.007s sys 0.015s
I20250812 01:54:25.936174 7652 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: No bootstrap required, opened a new log
I20250812 01:54:25.936729 7652 ts_tablet_manager.cc:1397] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Time spent bootstrapping tablet: real 0.018s user 0.006s sys 0.008s
I20250812 01:54:25.937171 7651 tablet_bootstrap.cc:492] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: No bootstrap required, opened a new log
I20250812 01:54:25.937652 7651 ts_tablet_manager.cc:1397] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Time spent bootstrapping tablet: real 0.020s user 0.010s sys 0.007s
I20250812 01:54:25.956213 7652 raft_consensus.cc:357] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.956957 7652 raft_consensus.cc:383] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:25.957224 7652 raft_consensus.cc:738] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fe20da274ee345e985d8a8db40df0ff8, State: Initialized, Role: FOLLOWER
I20250812 01:54:25.958005 7652 consensus_queue.cc:260] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.961948 7652 ts_tablet_manager.cc:1428] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Time spent starting tablet: real 0.025s user 0.023s sys 0.000s
I20250812 01:54:25.962388 7650 raft_consensus.cc:357] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.963189 7650 raft_consensus.cc:383] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:25.963485 7650 raft_consensus.cc:738] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 85881f67d05b4ca284c7bee28066db69, State: Initialized, Role: FOLLOWER
I20250812 01:54:25.964288 7650 consensus_queue.cc:260] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.965034 7651 raft_consensus.cc:357] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.965952 7651 raft_consensus.cc:383] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:25.966300 7651 raft_consensus.cc:738] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1ddae331ea87447f9f7c33c827da5644, State: Initialized, Role: FOLLOWER
I20250812 01:54:25.967204 7651 consensus_queue.cc:260] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:25.968658 7650 ts_tablet_manager.cc:1428] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Time spent starting tablet: real 0.033s user 0.033s sys 0.000s
I20250812 01:54:25.971277 7631 heartbeater.cc:499] Master 127.2.74.126:45231 was elected leader, sending a full tablet report...
I20250812 01:54:25.972683 7651 ts_tablet_manager.cc:1428] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Time spent starting tablet: real 0.035s user 0.026s sys 0.008s
W20250812 01:54:25.980875 7632 tablet.cc:2378] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:54:26.022637 7656 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:54:26.023123 7656 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:26.025437 7656 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 85881f67d05b4ca284c7bee28066db69 (127.2.74.66:44681), 1ddae331ea87447f9f7c33c827da5644 (127.2.74.69:40143)
I20250812 01:54:26.035907 7658 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:54:26.036522 7658 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:26.037261 7184 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "fe20da274ee345e985d8a8db40df0ff8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "85881f67d05b4ca284c7bee28066db69" is_pre_election: true
I20250812 01:54:26.037253 7586 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "fe20da274ee345e985d8a8db40df0ff8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1ddae331ea87447f9f7c33c827da5644" is_pre_election: true
I20250812 01:54:26.038069 7586 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fe20da274ee345e985d8a8db40df0ff8 in term 0.
I20250812 01:54:26.038141 7184 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fe20da274ee345e985d8a8db40df0ff8 in term 0.
I20250812 01:54:26.039149 7658 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151), 85881f67d05b4ca284c7bee28066db69 (127.2.74.66:44681)
I20250812 01:54:26.039564 7253 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 85881f67d05b4ca284c7bee28066db69, fe20da274ee345e985d8a8db40df0ff8; no voters:
I20250812 01:54:26.040928 7656 raft_consensus.cc:2802] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:54:26.041365 7656 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:54:26.041759 7656 raft_consensus.cc:3058] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:26.051132 7656 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:26.054593 7656 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [CANDIDATE]: Term 1 election: Requested vote from peers 85881f67d05b4ca284c7bee28066db69 (127.2.74.66:44681), 1ddae331ea87447f9f7c33c827da5644 (127.2.74.69:40143)
I20250812 01:54:26.054889 7184 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "1ddae331ea87447f9f7c33c827da5644" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "85881f67d05b4ca284c7bee28066db69" is_pre_election: true
I20250812 01:54:26.054855 7317 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "1ddae331ea87447f9f7c33c827da5644" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fe20da274ee345e985d8a8db40df0ff8" is_pre_election: true
I20250812 01:54:26.055641 7317 raft_consensus.cc:2391] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 1ddae331ea87447f9f7c33c827da5644 in current term 1: Already voted for candidate fe20da274ee345e985d8a8db40df0ff8 in this term.
I20250812 01:54:26.055460 7184 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1ddae331ea87447f9f7c33c827da5644 in term 0.
I20250812 01:54:26.055560 7183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "fe20da274ee345e985d8a8db40df0ff8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "85881f67d05b4ca284c7bee28066db69"
I20250812 01:54:26.056123 7183 raft_consensus.cc:3058] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:26.057492 7522 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1ddae331ea87447f9f7c33c827da5644, 85881f67d05b4ca284c7bee28066db69; no voters: fe20da274ee345e985d8a8db40df0ff8
I20250812 01:54:26.058364 7658 raft_consensus.cc:2802] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:54:26.058698 7658 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:54:26.059015 7658 raft_consensus.cc:3058] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:26.059314 7586 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "fe20da274ee345e985d8a8db40df0ff8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1ddae331ea87447f9f7c33c827da5644"
I20250812 01:54:26.062335 7183 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fe20da274ee345e985d8a8db40df0ff8 in term 1.
I20250812 01:54:26.063129 7253 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 85881f67d05b4ca284c7bee28066db69, fe20da274ee345e985d8a8db40df0ff8; no voters:
I20250812 01:54:26.063710 7656 raft_consensus.cc:2802] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:26.063978 7658 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:26.065171 7586 raft_consensus.cc:2391] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate fe20da274ee345e985d8a8db40df0ff8 in current term 1: Already voted for candidate 1ddae331ea87447f9f7c33c827da5644 in this term.
I20250812 01:54:26.066143 7656 raft_consensus.cc:695] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [term 1 LEADER]: Becoming Leader. State: Replica: fe20da274ee345e985d8a8db40df0ff8, State: Running, Role: LEADER
I20250812 01:54:26.066725 7317 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "1ddae331ea87447f9f7c33c827da5644" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fe20da274ee345e985d8a8db40df0ff8"
I20250812 01:54:26.067616 7658 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [CANDIDATE]: Term 1 election: Requested vote from peers fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151), 85881f67d05b4ca284c7bee28066db69 (127.2.74.66:44681)
I20250812 01:54:26.068097 7183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "1ddae331ea87447f9f7c33c827da5644" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "85881f67d05b4ca284c7bee28066db69"
I20250812 01:54:26.067143 7656 consensus_queue.cc:237] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:26.068933 7183 raft_consensus.cc:2391] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate 1ddae331ea87447f9f7c33c827da5644 in current term 1: Already voted for candidate fe20da274ee345e985d8a8db40df0ff8 in this term.
I20250812 01:54:26.070228 7522 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [CANDIDATE]: Term 1 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1ddae331ea87447f9f7c33c827da5644; no voters: 85881f67d05b4ca284c7bee28066db69, fe20da274ee345e985d8a8db40df0ff8
I20250812 01:54:26.070910 7658 raft_consensus.cc:2747] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 1 FOLLOWER]: Leader election lost for term 1. Reason: could not achieve majority
I20250812 01:54:26.080365 6907 catalog_manager.cc:5582] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8 reported cstate change: term changed from 0 to 1, leader changed from <none> to fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67). New cstate: current_term: 1 leader_uuid: "fe20da274ee345e985d8a8db40df0ff8" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } health_report { overall_health: UNKNOWN } } }
W20250812 01:54:26.086712 7363 tablet.cc:2378] T 3922978286c849c1b209f35a02ad4b2a P fe20da274ee345e985d8a8db40df0ff8: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:54:26.105019 2345 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250812 01:54:26.108325 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 85881f67d05b4ca284c7bee28066db69 to finish bootstrapping
I20250812 01:54:26.122905 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver fe20da274ee345e985d8a8db40df0ff8 to finish bootstrapping
I20250812 01:54:26.133730 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1ddae331ea87447f9f7c33c827da5644 to finish bootstrapping
I20250812 01:54:26.145185 2345 test_util.cc:276] Using random seed: 1374293897
I20250812 01:54:26.171963 2345 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250812 01:54:26.173365 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7233
W20250812 01:54:26.200793 7669 negotiation.cc:337] Failed RPC negotiation. Trace:
0812 01:54:26.184792 (+ 0us) reactor.cc:625] Submitting negotiation task for client connection to 127.2.74.67:40151 (local address 127.0.0.1:58744)
0812 01:54:26.185417 (+ 625us) negotiation.cc:107] Waiting for socket to connect
0812 01:54:26.185461 (+ 44us) client_negotiation.cc:174] Beginning negotiation
0812 01:54:26.185662 (+ 201us) client_negotiation.cc:252] Sending NEGOTIATE NegotiatePB request
0812 01:54:26.199535 (+ 13873us) negotiation.cc:327] Negotiation complete: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
Metrics: {"client-negotiator.queue_time_us":75}
W20250812 01:54:26.210628 7230 tablet.cc:2378] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250812 01:54:26.212247 7668 meta_cache.cc:302] tablet 3922978286c849c1b209f35a02ad4b2a: replica fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151) has failed: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250812 01:54:26.232388 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.251446 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.273315 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.282131 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.309463 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.323094 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.354552 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.370191 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.407410 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.428107 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.474220 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.498734 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.550905 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.577672 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.635304 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.668428 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.735049 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.769394 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.836100 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.871742 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.952504 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:26.991621 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.076768 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.118925 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.211684 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.259836 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.314595 7668 meta_cache.cc:302] tablet 3922978286c849c1b209f35a02ad4b2a: replica fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151) has failed: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111)
W20250812 01:54:27.361467 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.411103 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.516363 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.566699 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.677155 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.732928 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.843214 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Illegal state: replica 85881f67d05b4ca284c7bee28066db69 is not leader of this config: current role FOLLOWER
W20250812 01:54:27.901777 7546 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:38864: Illegal state: replica 1ddae331ea87447f9f7c33c827da5644 is not leader of this config: current role FOLLOWER
I20250812 01:54:27.953369 7687 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:54:27.954012 7687 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
W20250812 01:54:27.966056 7120 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111)
I20250812 01:54:27.977644 7687 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151), 1ddae331ea87447f9f7c33c827da5644 (127.2.74.69:40143)
W20250812 01:54:27.987176 7120 leader_election.cc:336] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151): Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111)
I20250812 01:54:27.994778 7586 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "85881f67d05b4ca284c7bee28066db69" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1ddae331ea87447f9f7c33c827da5644" is_pre_election: true
I20250812 01:54:27.995433 7586 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 85881f67d05b4ca284c7bee28066db69 in term 1.
I20250812 01:54:27.996726 7119 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1ddae331ea87447f9f7c33c827da5644, 85881f67d05b4ca284c7bee28066db69; no voters: fe20da274ee345e985d8a8db40df0ff8
I20250812 01:54:27.997640 7687 raft_consensus.cc:2802] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250812 01:54:27.998035 7687 raft_consensus.cc:491] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:54:27.998370 7687 raft_consensus.cc:3058] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:54:28.006409 7687 raft_consensus.cc:513] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:28.008476 7687 leader_election.cc:290] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 election: Requested vote from peers fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151), 1ddae331ea87447f9f7c33c827da5644 (127.2.74.69:40143)
I20250812 01:54:28.009755 7586 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3922978286c849c1b209f35a02ad4b2a" candidate_uuid: "85881f67d05b4ca284c7bee28066db69" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1ddae331ea87447f9f7c33c827da5644"
I20250812 01:54:28.010356 7586 raft_consensus.cc:3058] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 1 FOLLOWER]: Advancing to term 2
W20250812 01:54:28.013792 7120 leader_election.cc:336] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151): Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111)
I20250812 01:54:28.017832 7586 raft_consensus.cc:2466] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 85881f67d05b4ca284c7bee28066db69 in term 2.
I20250812 01:54:28.019158 7119 leader_election.cc:304] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 1ddae331ea87447f9f7c33c827da5644, 85881f67d05b4ca284c7bee28066db69; no voters: fe20da274ee345e985d8a8db40df0ff8
I20250812 01:54:28.020013 7687 raft_consensus.cc:2802] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 2 FOLLOWER]: Leader election won for term 2
W20250812 01:54:28.022018 7144 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:32954: Service unavailable: leader is not yet ready
I20250812 01:54:28.024724 7687 raft_consensus.cc:695] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [term 2 LEADER]: Becoming Leader. State: Replica: 85881f67d05b4ca284c7bee28066db69, State: Running, Role: LEADER
I20250812 01:54:28.025868 7687 consensus_queue.cc:237] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } }
I20250812 01:54:28.041337 6907 catalog_manager.cc:5582] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 reported cstate change: term changed from 1 to 2, leader changed from fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67) to 85881f67d05b4ca284c7bee28066db69 (127.2.74.66). New cstate: current_term: 2 leader_uuid: "85881f67d05b4ca284c7bee28066db69" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fe20da274ee345e985d8a8db40df0ff8" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 40151 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "85881f67d05b4ca284c7bee28066db69" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 44681 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 } health_report { overall_health: UNKNOWN } } }
I20250812 01:54:28.089219 7586 raft_consensus.cc:1273] T 3922978286c849c1b209f35a02ad4b2a P 1ddae331ea87447f9f7c33c827da5644 [term 2 FOLLOWER]: Refusing update from remote peer 85881f67d05b4ca284c7bee28066db69: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250812 01:54:28.090732 7692 consensus_queue.cc:1035] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1ddae331ea87447f9f7c33c827da5644" member_type: VOTER last_known_addr { host: "127.2.74.69" port: 40143 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250812 01:54:28.091450 7120 consensus_peers.cc:489] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 -> Peer fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151): Couldn't send request to peer fe20da274ee345e985d8a8db40df0ff8. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250812 01:54:28.107124 7698 mvcc.cc:204] Tried to move back new op lower bound from 7188331184478048256 to 7188331184243781632. Current Snapshot: MvccSnapshot[applied={T|T < 7188331184478048256}]
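
The election summaries above, "received 3 responses out of 3 voters: 2 yes votes; 1 no votes", follow the usual Raft majority rule: a candidate needs a strict majority of the voter set, so 2 of 3 is enough even though the unreachable peer fe20da27... ends up counted as a "no". Below is a minimal C++ sketch of that decision with illustrative names; it is not Kudu's actual leader_election.cc code.

#include <cstddef>

// Decide a Raft-style election from vote counts. Hypothetical helper, not
// Kudu's LeaderElection class: the candidate wins once yes votes reach a
// strict majority of all voters, and unreachable peers simply count as "no".
enum class ElectionResult { kUndecided, kWon, kLost };

ElectionResult DecideElection(std::size_t num_voters,
                              std::size_t yes_votes,
                              std::size_t no_votes) {
  const std::size_t majority = num_voters / 2 + 1;  // 2 when num_voters == 3
  if (yes_votes >= majority) return ElectionResult::kWon;
  if (no_votes >= majority) return ElectionResult::kLost;
  return ElectionResult::kUndecided;                // wait for more responses
}

// Matching the log above: DecideElection(3, 2, 1) == ElectionResult::kWon.
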
I20250812 01:54:28.311620 7164 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:54:28.330049 7566 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:28.339243 7031 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:28.355032 7431 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:30.093143 7566 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:30.113979 7431 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:30.158772 7164 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:54:30.173116 7031 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250812 01:54:30.578060 7120 consensus_peers.cc:489] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 -> Peer fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151): Couldn't send request to peer fe20da274ee345e985d8a8db40df0ff8. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
W20250812 01:54:31.414458 7225 debug-util.cc:398] Leaking SignalData structure 0x7b08000d61c0 after lost signal to thread 7101
W20250812 01:54:31.415591 7225 debug-util.cc:398] Leaking SignalData structure 0x7b08000d4260 after lost signal to thread 7228
I20250812 01:54:32.014780 7164 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:54:32.044693 7566 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:32.083464 7031 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:54:32.103729 7431 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250812 01:54:33.342506 7120 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111) [suppressed 11 similar messages]
W20250812 01:54:33.345882 7120 consensus_peers.cc:489] T 3922978286c849c1b209f35a02ad4b2a P 85881f67d05b4ca284c7bee28066db69 -> Peer fe20da274ee345e985d8a8db40df0ff8 (127.2.74.67:40151): Couldn't send request to peer fe20da274ee345e985d8a8db40df0ff8. Status: Network error: Client connection negotiation failed: client connection to 127.2.74.67:40151: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
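
The consensus_peers.cc warnings above are deliberately throttled (attempt 1, then 6, then 11: every 5th retry) so an unreachable peer does not flood the log. A rough C++ sketch of that kind of suppression, assuming nothing more than a modulo check on the attempt counter; this is not the actual Kudu implementation.

#include <iostream>
#include <string>

// Hypothetical throttled warning: log the first attempt and then every Nth
// retry, mirroring "this message will repeat every 5th retry" above.
void MaybeLogRetryFailure(int attempt, const std::string& msg, int every_nth = 5) {
  if (attempt % every_nth == 1) {  // attempts 1, 6, 11, ... when every_nth == 5
    std::cerr << msg << " This is attempt " << attempt
              << ": this message will repeat every " << every_nth
              << "th retry." << std::endl;
  }
}
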
I20250812 01:54:34.543422 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6967
I20250812 01:54:34.569175 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7100
I20250812 01:54:34.609597 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7366
I20250812 01:54:34.637037 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7501
I20250812 01:54:34.669360 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 6874
2025-08-12T01:54:34Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (21667 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (21668 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250812 01:54:34.731894 2345 test_util.cc:276] Using random seed: 1382880601
I20250812 01:54:34.736083 2345 ts_itest-base.cc:115] Starting cluster with:
I20250812 01:54:34.736251 2345 ts_itest-base.cc:116] --------------
I20250812 01:54:34.736367 2345 ts_itest-base.cc:117] 1 tablet servers
I20250812 01:54:34.736481 2345 ts_itest-base.cc:118] 1 replicas per TS
I20250812 01:54:34.736614 2345 ts_itest-base.cc:119] --------------
2025-08-12T01:54:34Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:54:34Z Disabled control of system clock
I20250812 01:54:34.782814 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:38153
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:44927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:38153 with env {}
W20250812 01:54:35.087659 7817 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:35.088305 7817 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:35.088753 7817 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:35.120184 7817 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:54:35.120504 7817 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:35.120729 7817 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:54:35.120926 7817 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:54:35.159849 7817 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:44927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:38153
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:38153
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
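
The "non-default flags" listing above is simply the set of gflags whose current value differs from the compiled-in default. A short sketch of how such a dump can be produced with the gflags library; this is an illustration, not the code behind master_runner.cc, and the namespace spelling (google vs. gflags) varies between gflags versions.

#include <iostream>
#include <vector>

#include <gflags/gflags.h>

// Print every command-line flag that has been changed from its default,
// similar in spirit to the "non-default flags" dump above.
void DumpNonDefaultFlags() {
  std::vector<google::CommandLineFlagInfo> flags;
  google::GetAllFlags(&flags);
  for (const auto& flag : flags) {
    if (!flag.is_default) {
      std::cout << "--" << flag.name << "=" << flag.current_value << "\n";
    }
  }
}
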
I20250812 01:54:35.161197 7817 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:35.162762 7817 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:35.174382 7823 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:35.176447 7824 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:35.182261 7826 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:36.371688 7825 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250812 01:54:36.371819 7817 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:36.376109 7817 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:36.379416 7817 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:36.380870 7817 hybrid_clock.cc:648] HybridClock initialized: now 1754963676380824 us; error 60 us; skew 500 ppm
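
The HybridClock lines report a physical "now" in microseconds together with a bounded error and maximum skew. Kudu's hybrid timestamps combine that physical reading with a small logical counter; the 12-bit split in the sketch below is an assumption for illustration (it appears consistent with the MVCC op ids earlier in this log, but verify against the source before relying on it).

#include <cstdint>

// Hypothetical hybrid-timestamp packing: physical microseconds in the high
// bits, a small logical counter in the low bits. The 12-bit logical width is
// an assumption made for this sketch.
constexpr int kLogicalBits = 12;

uint64_t MakeHybridTimestamp(uint64_t physical_micros, uint64_t logical) {
  return (physical_micros << kLogicalBits) |
         (logical & ((1ULL << kLogicalBits) - 1));
}

uint64_t PhysicalMicros(uint64_t hybrid_timestamp) {
  return hybrid_timestamp >> kLogicalBits;
}
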
I20250812 01:54:36.381659 7817 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:36.388520 7817 webserver.cc:489] Webserver started at http://127.2.74.126:36559/ using document root <none> and password file <none>
I20250812 01:54:36.389564 7817 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:36.389776 7817 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:36.390273 7817 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:36.394796 7817 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "05a2dc2f873e45a6aefbb49e65af4c3e"
format_stamp: "Formatted at 2025-08-12 01:54:36 on dist-test-slave-3nxt"
I20250812 01:54:36.395977 7817 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "05a2dc2f873e45a6aefbb49e65af4c3e"
format_stamp: "Formatted at 2025-08-12 01:54:36 on dist-test-slave-3nxt"
I20250812 01:54:36.403488 7817 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250812 01:54:36.409415 7833 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:36.410504 7817 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.004s
I20250812 01:54:36.410856 7817 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
uuid: "05a2dc2f873e45a6aefbb49e65af4c3e"
format_stamp: "Formatted at 2025-08-12 01:54:36 on dist-test-slave-3nxt"
I20250812 01:54:36.411212 7817 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:36.459302 7817 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:36.460872 7817 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:36.461349 7817 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:36.533356 7817 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:38153
I20250812 01:54:36.533443 7884 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:38153 every 8 connection(s)
I20250812 01:54:36.536121 7817 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250812 01:54:36.542085 7885 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:36.542881 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7817
I20250812 01:54:36.543284 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250812 01:54:36.562242 7885 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e: Bootstrap starting.
I20250812 01:54:36.568397 7885 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:36.570159 7885 log.cc:826] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:36.574932 7885 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e: No bootstrap required, opened a new log
I20250812 01:54:36.594296 7885 raft_consensus.cc:357] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } }
I20250812 01:54:36.595080 7885 raft_consensus.cc:383] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:36.595325 7885 raft_consensus.cc:738] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 05a2dc2f873e45a6aefbb49e65af4c3e, State: Initialized, Role: FOLLOWER
I20250812 01:54:36.595976 7885 consensus_queue.cc:260] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } }
I20250812 01:54:36.596474 7885 raft_consensus.cc:397] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:54:36.596783 7885 raft_consensus.cc:491] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:54:36.597079 7885 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:36.601444 7885 raft_consensus.cc:513] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } }
I20250812 01:54:36.602161 7885 leader_election.cc:304] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 05a2dc2f873e45a6aefbb49e65af4c3e; no voters:
I20250812 01:54:36.603825 7885 leader_election.cc:290] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:54:36.604619 7890 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:36.606899 7890 raft_consensus.cc:695] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [term 1 LEADER]: Becoming Leader. State: Replica: 05a2dc2f873e45a6aefbb49e65af4c3e, State: Running, Role: LEADER
I20250812 01:54:36.607662 7890 consensus_queue.cc:237] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } }
I20250812 01:54:36.608469 7885 sys_catalog.cc:564] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:54:36.618211 7892 sys_catalog.cc:455] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [sys.catalog]: SysCatalogTable state changed. Reason: New leader 05a2dc2f873e45a6aefbb49e65af4c3e. Latest consensus state: current_term: 1 leader_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } } }
I20250812 01:54:36.618875 7891 sys_catalog.cc:455] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05a2dc2f873e45a6aefbb49e65af4c3e" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 38153 } } }
I20250812 01:54:36.619294 7892 sys_catalog.cc:458] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:36.619520 7891 sys_catalog.cc:458] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:36.622714 7898 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:54:36.636173 7898 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:54:36.652375 7898 catalog_manager.cc:1349] Generated new cluster ID: 8588093f1f2c4ac6a527b1a3313f4e71
I20250812 01:54:36.652733 7898 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:54:36.672165 7898 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:54:36.673750 7898 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:54:36.707134 7898 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 05a2dc2f873e45a6aefbb49e65af4c3e: Generated new TSK 0
I20250812 01:54:36.708117 7898 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:54:36.717015 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:38153
--builtin_ntp_servers=127.2.74.84:44927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250812 01:54:37.031607 7909 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:37.032176 7909 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:37.032747 7909 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:37.064600 7909 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:37.065593 7909 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:54:37.101212 7909 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:44927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:38153
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:37.102550 7909 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:37.104200 7909 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:37.116989 7915 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:38.520490 7914 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 7909
W20250812 01:54:38.914227 7914 kernel_stack_watchdog.cc:198] Thread 7909 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:54:37.117767 7916 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:38.915581 7909 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.797s user 0.000s sys 0.002s
W20250812 01:54:38.915956 7909 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.797s user 0.000s sys 0.002s
W20250812 01:54:38.922082 7917 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1802 milliseconds
W20250812 01:54:38.922663 7918 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:38.922741 7909 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:38.924190 7909 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:38.926820 7909 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:38.928331 7909 hybrid_clock.cc:648] HybridClock initialized: now 1754963678928276 us; error 52 us; skew 500 ppm
I20250812 01:54:38.929419 7909 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:38.937024 7909 webserver.cc:489] Webserver started at http://127.2.74.65:39887/ using document root <none> and password file <none>
I20250812 01:54:38.938373 7909 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:38.938675 7909 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:38.939265 7909 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:38.945896 7909 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f"
format_stamp: "Formatted at 2025-08-12 01:54:38 on dist-test-slave-3nxt"
I20250812 01:54:38.947503 7909 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f"
format_stamp: "Formatted at 2025-08-12 01:54:38 on dist-test-slave-3nxt"
I20250812 01:54:38.956964 7909 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.011s sys 0.001s
I20250812 01:54:38.964635 7925 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:38.965875 7909 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250812 01:54:38.966284 7909 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f"
format_stamp: "Formatted at 2025-08-12 01:54:38 on dist-test-slave-3nxt"
I20250812 01:54:38.966738 7909 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:39.035181 7909 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:39.037189 7909 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:39.037786 7909 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:39.041095 7909 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:39.046854 7909 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:39.047147 7909 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:39.047472 7909 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:39.047693 7909 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:39.208922 7909 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:38323
I20250812 01:54:39.208993 8037 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:38323 every 8 connection(s)
I20250812 01:54:39.212743 7909 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250812 01:54:39.218111 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 7909
I20250812 01:54:39.218653 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754963523148890-2345-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250812 01:54:39.238296 8038 heartbeater.cc:344] Connected to a master server at 127.2.74.126:38153
I20250812 01:54:39.238834 8038 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:39.240082 8038 heartbeater.cc:507] Master 127.2.74.126:38153 requested a full tablet report, sending...
I20250812 01:54:39.242836 7849 ts_manager.cc:194] Registered new tserver with Master: c6eb779c97c0428d9ddfa10c95ef4c2f (127.2.74.65:38323)
I20250812 01:54:39.245707 7849 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:58435
I20250812 01:54:39.253859 2345 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250812 01:54:39.287338 7849 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:40796:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250812 01:54:39.350306 7973 tablet_service.cc:1468] Processing CreateTablet for tablet 88967254f3f040138b18392fe9a7943b (DEFAULT_TABLE table=TestTable [id=b37c3dbac5a64ef3a280678d458dc43d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:39.351735 7973 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 88967254f3f040138b18392fe9a7943b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:39.372223 8053 tablet_bootstrap.cc:492] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: Bootstrap starting.
I20250812 01:54:39.379834 8053 tablet_bootstrap.cc:654] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:39.382324 8053 log.cc:826] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:39.387652 8053 tablet_bootstrap.cc:492] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: No bootstrap required, opened a new log
I20250812 01:54:39.388154 8053 ts_tablet_manager.cc:1397] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: Time spent bootstrapping tablet: real 0.016s user 0.013s sys 0.000s
I20250812 01:54:39.414295 8053 raft_consensus.cc:357] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 38323 } }
I20250812 01:54:39.415037 8053 raft_consensus.cc:383] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:39.415314 8053 raft_consensus.cc:738] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c6eb779c97c0428d9ddfa10c95ef4c2f, State: Initialized, Role: FOLLOWER
I20250812 01:54:39.416158 8053 consensus_queue.cc:260] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 38323 } }
I20250812 01:54:39.416864 8053 raft_consensus.cc:397] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:54:39.417197 8053 raft_consensus.cc:491] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:54:39.417608 8053 raft_consensus.cc:3058] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:39.424573 8053 raft_consensus.cc:513] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 38323 } }
I20250812 01:54:39.425590 8053 leader_election.cc:304] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c6eb779c97c0428d9ddfa10c95ef4c2f; no voters:
I20250812 01:54:39.427618 8053 leader_election.cc:290] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:54:39.427960 8055 raft_consensus.cc:2802] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:39.431362 8038 heartbeater.cc:499] Master 127.2.74.126:38153 was elected leader, sending a full tablet report...
I20250812 01:54:39.432098 8053 ts_tablet_manager.cc:1428] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f: Time spent starting tablet: real 0.044s user 0.041s sys 0.004s
I20250812 01:54:39.433405 8055 raft_consensus.cc:695] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [term 1 LEADER]: Becoming Leader. State: Replica: c6eb779c97c0428d9ddfa10c95ef4c2f, State: Running, Role: LEADER
I20250812 01:54:39.434082 8055 consensus_queue.cc:237] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 38323 } }
I20250812 01:54:39.447494 7849 catalog_manager.cc:5582] T 88967254f3f040138b18392fe9a7943b P c6eb779c97c0428d9ddfa10c95ef4c2f reported cstate change: term changed from 0 to 1, leader changed from <none> to c6eb779c97c0428d9ddfa10c95ef4c2f (127.2.74.65). New cstate: current_term: 1 leader_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c6eb779c97c0428d9ddfa10c95ef4c2f" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 38323 } health_report { overall_health: HEALTHY } } }
I20250812 01:54:39.477492 2345 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250812 01:54:39.480809 2345 ts_itest-base.cc:246] Waiting for 1 tablets on tserver c6eb779c97c0428d9ddfa10c95ef4c2f to finish bootstrapping
I20250812 01:54:42.159943 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7909
I20250812 01:54:42.186502 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 7817
2025-08-12T01:54:42Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (7509 ms)
[----------] 1 test from ListTableCliSimpleParamTest (7509 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250812 01:54:42.241431 2345 test_util.cc:276] Using random seed: 1390390141
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (12 ms)
[----------] 1 test from ListTableCliParamTest (12 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-08-12T01:54:42Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-12T01:54:42Z Disabled control of system clock
I20250812 01:54:42.294440 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:41705
--webserver_interface=127.2.74.126
--webserver_port=0
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:41705 with env {}
W20250812 01:54:42.598194 8081 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:42.598794 8081 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:42.599261 8081 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:42.630992 8081 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:54:42.631318 8081 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:42.631580 8081 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:54:42.631817 8081 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:54:42.667339 8081 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:41705
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:41705
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:42.668676 8081 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:42.670328 8081 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:42.681556 8087 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:42.682318 8088 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:43.882411 8090 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:43.884785 8089 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1198 milliseconds
I20250812 01:54:43.884892 8081 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:43.886284 8081 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:43.889657 8081 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:43.891104 8081 hybrid_clock.cc:648] HybridClock initialized: now 1754963683891062 us; error 60 us; skew 500 ppm
I20250812 01:54:43.891927 8081 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:43.898905 8081 webserver.cc:489] Webserver started at http://127.2.74.126:37101/ using document root <none> and password file <none>
I20250812 01:54:43.899878 8081 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:43.900096 8081 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:43.900568 8081 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:43.905099 8081 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/instance:
uuid: "2066d917881543b9a6aaa38cc9095805"
format_stamp: "Formatted at 2025-08-12 01:54:43 on dist-test-slave-3nxt"
I20250812 01:54:43.906256 8081 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal/instance:
uuid: "2066d917881543b9a6aaa38cc9095805"
format_stamp: "Formatted at 2025-08-12 01:54:43 on dist-test-slave-3nxt"
I20250812 01:54:43.914119 8081 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250812 01:54:43.919776 8098 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:43.920830 8081 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250812 01:54:43.921161 8081 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
uuid: "2066d917881543b9a6aaa38cc9095805"
format_stamp: "Formatted at 2025-08-12 01:54:43 on dist-test-slave-3nxt"
I20250812 01:54:43.921516 8081 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:43.984912 8081 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:43.986409 8081 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:43.986868 8081 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:44.058485 8081 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:41705
I20250812 01:54:44.058550 8149 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:41705 every 8 connection(s)
I20250812 01:54:44.061329 8081 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
I20250812 01:54:44.066399 8150 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:44.070173 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8081
I20250812 01:54:44.070761 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal/instance
I20250812 01:54:44.091881 8150 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805: Bootstrap starting.
I20250812 01:54:44.098546 8150 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:44.100184 8150 log.cc:826] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:44.104921 8150 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805: No bootstrap required, opened a new log
I20250812 01:54:44.123014 8150 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:54:44.123693 8150 raft_consensus.cc:383] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:44.123891 8150 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2066d917881543b9a6aaa38cc9095805, State: Initialized, Role: FOLLOWER
I20250812 01:54:44.124519 8150 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:54:44.125044 8150 raft_consensus.cc:397] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:54:44.125281 8150 raft_consensus.cc:491] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:54:44.125546 8150 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:44.130436 8150 raft_consensus.cc:513] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:54:44.131117 8150 leader_election.cc:304] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2066d917881543b9a6aaa38cc9095805; no voters:
I20250812 01:54:44.132827 8150 leader_election.cc:290] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:54:44.133561 8155 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:44.135728 8155 raft_consensus.cc:695] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [term 1 LEADER]: Becoming Leader. State: Replica: 2066d917881543b9a6aaa38cc9095805, State: Running, Role: LEADER
I20250812 01:54:44.136512 8155 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:54:44.137907 8150 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:54:44.146581 8156 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2066d917881543b9a6aaa38cc9095805" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } } }
I20250812 01:54:44.147549 8156 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:44.147185 8157 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2066d917881543b9a6aaa38cc9095805. Latest consensus state: current_term: 1 leader_uuid: "2066d917881543b9a6aaa38cc9095805" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2066d917881543b9a6aaa38cc9095805" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } } }
I20250812 01:54:44.148038 8157 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:44.151779 8164 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:54:44.162477 8164 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:54:44.177964 8164 catalog_manager.cc:1349] Generated new cluster ID: af97828a5a83412b84b4c7b7a8c4366a
I20250812 01:54:44.178251 8164 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:54:44.213460 8164 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:54:44.214911 8164 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:54:44.229689 8164 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 2066d917881543b9a6aaa38cc9095805: Generated new TSK 0
I20250812 01:54:44.230568 8164 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250812 01:54:44.245764 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:0
--local_ip_for_outbound_sockets=127.2.74.65
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250812 01:54:44.557314 8174 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:44.557828 8174 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:44.558351 8174 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:44.589762 8174 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:44.590659 8174 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:54:44.627137 8174 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
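
The "non-default flags" dump above is gflags-style command-line parsing; the earlier "Enabled unsafe flag" / "Enabled experimental flag" warnings come from flags that are only honored because --unlock_unsafe_flags and --unlock_experimental_flags are set. A minimal sketch of the underlying gflags mechanism is below; the flag names are borrowed from the dump, but the defaults and help strings are placeholders, not Kudu's actual definitions, and Kudu layers its own unsafe/experimental tagging on top of plain gflags.

    // Minimal gflags sketch. Flag names mirror the dump above, but defaults and
    // help strings are placeholders, not Kudu's definitions.
    #include <gflags/gflags.h>
    #include <iostream>

    DEFINE_string(rpc_bind_addresses, "0.0.0.0:0",
                  "Placeholder: address list for the RPC server to bind to.");
    DEFINE_bool(never_fsync, false,
                "Placeholder: skip fsync() calls entirely (tests only).");

    int main(int argc, char** argv) {
      // Parses flags such as --rpc_bind_addresses=... and --never_fsync from
      // argv, the same way the kudu binaries consume the long flag lists above.
      gflags::ParseCommandLineFlags(&argc, &argv, /*remove_flags=*/true);
      std::cout << "rpc_bind_addresses=" << FLAGS_rpc_bind_addresses
                << " never_fsync=" << FLAGS_never_fsync << std::endl;
      return 0;
    }
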
I20250812 01:54:44.628490 8174 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:44.630121 8174 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:44.643121 8180 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:44.644261 8181 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:46.047008 8179 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 8174
W20250812 01:54:46.418150 8179 kernel_stack_watchdog.cc:198] Thread 8174 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:54:46.418977 8174 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.774s user 0.577s sys 1.123s
W20250812 01:54:46.419360 8174 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.775s user 0.577s sys 1.123s
W20250812 01:54:46.422076 8183 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:46.425169 8182 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1779 milliseconds
I20250812 01:54:46.425202 8174 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:46.426863 8174 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:46.429118 8174 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:46.430486 8174 hybrid_clock.cc:648] HybridClock initialized: now 1754963686430456 us; error 45 us; skew 500 ppm
I20250812 01:54:46.431288 8174 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:46.437609 8174 webserver.cc:489] Webserver started at http://127.2.74.65:33469/ using document root <none> and password file <none>
I20250812 01:54:46.438624 8174 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:46.438861 8174 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:46.439342 8174 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:46.443852 8174 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/instance:
uuid: "299abc19c2f44f32827dfd86b2119473"
format_stamp: "Formatted at 2025-08-12 01:54:46 on dist-test-slave-3nxt"
I20250812 01:54:46.445116 8174 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal/instance:
uuid: "299abc19c2f44f32827dfd86b2119473"
format_stamp: "Formatted at 2025-08-12 01:54:46 on dist-test-slave-3nxt"
I20250812 01:54:46.453433 8174 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.002s
I20250812 01:54:46.460103 8190 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:46.461462 8174 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.001s
I20250812 01:54:46.461808 8174 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
uuid: "299abc19c2f44f32827dfd86b2119473"
format_stamp: "Formatted at 2025-08-12 01:54:46 on dist-test-slave-3nxt"
I20250812 01:54:46.462155 8174 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:46.513828 8174 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:46.515340 8174 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:46.515829 8174 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:46.518996 8174 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:46.523602 8174 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:46.523855 8174 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:46.524102 8174 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:46.524261 8174 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:46.698580 8174 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:46441
I20250812 01:54:46.698748 8302 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:46441 every 8 connection(s)
I20250812 01:54:46.701205 8174 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
I20250812 01:54:46.706205 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8174
I20250812 01:54:46.706856 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal/instance
I20250812 01:54:46.716864 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:0
--local_ip_for_outbound_sockets=127.2.74.66
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250812 01:54:46.737979 8303 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:54:46.738466 8303 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:46.739629 8303 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:54:46.742277 8115 ts_manager.cc:194] Registered new tserver with Master: 299abc19c2f44f32827dfd86b2119473 (127.2.74.65:46441)
I20250812 01:54:46.744259 8115 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:51095
W20250812 01:54:47.084270 8307 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:47.085120 8307 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:47.085649 8307 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:47.121304 8307 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:47.122192 8307 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:54:47.159369 8307 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:47.160758 8307 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:47.162472 8307 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:47.175032 8313 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:47.175832 8314 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:47.749039 8303 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
W20250812 01:54:48.581866 8316 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:48.584261 8315 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1403 milliseconds
W20250812 01:54:48.584983 8307 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.409s user 0.473s sys 0.917s
W20250812 01:54:48.585264 8307 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.410s user 0.473s sys 0.917s
I20250812 01:54:48.585477 8307 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:48.586516 8307 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:48.589143 8307 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:48.590590 8307 hybrid_clock.cc:648] HybridClock initialized: now 1754963688590563 us; error 44 us; skew 500 ppm
I20250812 01:54:48.591377 8307 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:48.598708 8307 webserver.cc:489] Webserver started at http://127.2.74.66:39669/ using document root <none> and password file <none>
I20250812 01:54:48.599759 8307 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:48.599987 8307 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:48.600436 8307 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:48.604993 8307 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/instance:
uuid: "938e75c950ab49aaa60ab39338e88cf3"
format_stamp: "Formatted at 2025-08-12 01:54:48 on dist-test-slave-3nxt"
I20250812 01:54:48.606149 8307 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal/instance:
uuid: "938e75c950ab49aaa60ab39338e88cf3"
format_stamp: "Formatted at 2025-08-12 01:54:48 on dist-test-slave-3nxt"
I20250812 01:54:48.614111 8307 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250812 01:54:48.620035 8323 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:48.621351 8307 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250812 01:54:48.621714 8307 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
uuid: "938e75c950ab49aaa60ab39338e88cf3"
format_stamp: "Formatted at 2025-08-12 01:54:48 on dist-test-slave-3nxt"
I20250812 01:54:48.622049 8307 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:48.704829 8307 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:48.706650 8307 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:48.707226 8307 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:48.709865 8307 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:48.714037 8307 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:48.714286 8307 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:48.714542 8307 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:48.714707 8307 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:48.855546 8307 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:41359
I20250812 01:54:48.855674 8435 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:41359 every 8 connection(s)
I20250812 01:54:48.858186 8307 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
I20250812 01:54:48.866753 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8307
I20250812 01:54:48.867341 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal/instance
I20250812 01:54:48.874501 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:0
--local_ip_for_outbound_sockets=127.2.74.67
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250812 01:54:48.882498 8436 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:54:48.883023 8436 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:48.884068 8436 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:54:48.886404 8115 ts_manager.cc:194] Registered new tserver with Master: 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359)
I20250812 01:54:48.887640 8115 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:48769
W20250812 01:54:49.199800 8440 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:49.200330 8440 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:54:49.200876 8440 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:49.235394 8440 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:54:49.236310 8440 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:54:49.274076 8440 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=0
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:54:49.275535 8440 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:49.277226 8440 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:49.291358 8446 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:49.291364 8447 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:49.891103 8436 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
W20250812 01:54:50.485128 8449 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:50.487383 8448 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1191 milliseconds
I20250812 01:54:50.487501 8440 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:54:50.488791 8440 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:54:50.491003 8440 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:54:50.492389 8440 hybrid_clock.cc:648] HybridClock initialized: now 1754963690492348 us; error 62 us; skew 500 ppm
I20250812 01:54:50.493284 8440 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:50.500238 8440 webserver.cc:489] Webserver started at http://127.2.74.67:38123/ using document root <none> and password file <none>
I20250812 01:54:50.501338 8440 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:50.501540 8440 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:50.502038 8440 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:50.506632 8440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/instance:
uuid: "9c83f2a853c1461e86625f2e1479f413"
format_stamp: "Formatted at 2025-08-12 01:54:50 on dist-test-slave-3nxt"
I20250812 01:54:50.507825 8440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal/instance:
uuid: "9c83f2a853c1461e86625f2e1479f413"
format_stamp: "Formatted at 2025-08-12 01:54:50 on dist-test-slave-3nxt"
I20250812 01:54:50.515511 8440 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250812 01:54:50.521762 8456 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:50.523115 8440 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250812 01:54:50.523480 8440 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
uuid: "9c83f2a853c1461e86625f2e1479f413"
format_stamp: "Formatted at 2025-08-12 01:54:50 on dist-test-slave-3nxt"
I20250812 01:54:50.523847 8440 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:50.579815 8440 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:50.581396 8440 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:50.581859 8440 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:50.584537 8440 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:54:50.588852 8440 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250812 01:54:50.589092 8440 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:50.589354 8440 ts_tablet_manager.cc:610] Registered 0 tablets
I20250812 01:54:50.589517 8440 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:50.733526 8440 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:33653
I20250812 01:54:50.733677 8568 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:33653 every 8 connection(s)
I20250812 01:54:50.736109 8440 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
I20250812 01:54:50.737294 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8440
I20250812 01:54:50.737887 2345 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal/instance
I20250812 01:54:50.758898 8569 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:54:50.759534 8569 heartbeater.cc:461] Registering TS with master...
I20250812 01:54:50.760686 8569 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:54:50.762975 8114 ts_manager.cc:194] Registered new tserver with Master: 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653)
I20250812 01:54:50.764428 8114 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:52559
I20250812 01:54:50.775332 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
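
The external_mini_cluster.cc:949 line above is the harness confirming that all three tablet servers have registered with the master; that kind of check is a poll-until-deadline loop. Below is a generic, hypothetical sketch of such a wait (it is not Kudu's ExternalMiniCluster API; the helper name and signature are invented for illustration).

    // Hypothetical "wait until N tablet servers have registered" helper; the
    // name and signature are invented, not Kudu's ExternalMiniCluster API.
    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <thread>

    // Polls count_registered() until it reports at least `expected` servers or
    // the timeout expires. Returns true on success, false on timeout.
    bool WaitForTabletServerCount(const std::function<int()>& count_registered,
                                  int expected,
                                  std::chrono::milliseconds timeout) {
      const auto deadline = std::chrono::steady_clock::now() + timeout;
      while (std::chrono::steady_clock::now() < deadline) {
        if (count_registered() >= expected) return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
      }
      return count_registered() >= expected;
    }

    int main() {
      std::atomic<int> registered{0};
      // Simulate three tablet servers heartbeating in, one at a time.
      std::thread tservers([&] {
        for (int i = 0; i < 3; ++i) {
          std::this_thread::sleep_for(std::chrono::milliseconds(50));
          ++registered;
        }
      });
      const bool ok = WaitForTabletServerCount(
          [&] { return registered.load(); }, /*expected=*/3,
          std::chrono::seconds(5));
      tservers.join();
      return ok ? 0 : 1;
    }
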
I20250812 01:54:50.806685 2345 test_util.cc:276] Using random seed: 1398955400
I20250812 01:54:50.848187 8114 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:38758:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
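
For reference, the request logged above maps onto the public Kudu C++ client in a straightforward way. The sketch below is not taken from the test source: it recreates the same schema (key INT32 NOT NULL primary key, int_val INT32 NOT NULL, string_val STRING nullable), range partitioning on "key" with no splits, and 3 replicas, against the master address this mini-cluster happened to bind; check the exact signatures against your Kudu version's client.h.

    // Sketch only: recreate the logged CreateTable request via the Kudu C++
    // client. Master address taken from this log; verify API details against
    // the kudu/client/client.h shipped with your Kudu version.
    #include <memory>
    #include <string>
    #include <vector>
    #include <kudu/client/client.h>

    using kudu::Status;
    using kudu::client::KuduClient;
    using kudu::client::KuduClientBuilder;
    using kudu::client::KuduColumnSchema;
    using kudu::client::KuduSchema;
    using kudu::client::KuduSchemaBuilder;
    using kudu::client::KuduTableCreator;

    int main() {
      kudu::client::sp::shared_ptr<KuduClient> client;
      Status s = KuduClientBuilder()
                     .add_master_server_addr("127.2.74.126:41705")
                     .Build(&client);
      if (!s.ok()) return 1;

      // Schema matching the logged request.
      KuduSchemaBuilder b;
      b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
      b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
      b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
      KuduSchema schema;
      if (!b.Build(&schema).ok()) return 1;

      // Range partitioning on "key", no explicit splits, 3 replicas.
      std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
      s = creator->table_name("pre_rebuild")
              .schema(&schema)
              .set_range_partition_columns({"key"})
              .num_replicas(3)
              .Create();
      return s.ok() ? 0 : 1;
    }
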
W20250812 01:54:50.850651 8114 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250812 01:54:50.918210 8371 tablet_service.cc:1468] Processing CreateTablet for tablet 960c593e29924de9b11e0c95828a091e (DEFAULT_TABLE table=pre_rebuild [id=36dd67fd1592463585beb664f6636459]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:50.920334 8371 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 960c593e29924de9b11e0c95828a091e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:50.925956 8504 tablet_service.cc:1468] Processing CreateTablet for tablet 960c593e29924de9b11e0c95828a091e (DEFAULT_TABLE table=pre_rebuild [id=36dd67fd1592463585beb664f6636459]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:50.928123 8504 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 960c593e29924de9b11e0c95828a091e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:50.927583 8238 tablet_service.cc:1468] Processing CreateTablet for tablet 960c593e29924de9b11e0c95828a091e (DEFAULT_TABLE table=pre_rebuild [id=36dd67fd1592463585beb664f6636459]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:54:50.929433 8238 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 960c593e29924de9b11e0c95828a091e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:50.951406 8594 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Bootstrap starting.
I20250812 01:54:50.952203 8593 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Bootstrap starting.
I20250812 01:54:50.957717 8595 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Bootstrap starting.
I20250812 01:54:50.959168 8594 tablet_bootstrap.cc:654] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:50.959210 8593 tablet_bootstrap.cc:654] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:50.961321 8594 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:50.962031 8593 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:50.966861 8595 tablet_bootstrap.cc:654] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:50.967010 8593 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: No bootstrap required, opened a new log
I20250812 01:54:50.967355 8594 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: No bootstrap required, opened a new log
I20250812 01:54:50.967471 8593 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Time spent bootstrapping tablet: real 0.019s user 0.002s sys 0.013s
I20250812 01:54:50.967875 8594 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Time spent bootstrapping tablet: real 0.017s user 0.013s sys 0.000s
I20250812 01:54:50.969256 8595 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:50.978919 8595 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: No bootstrap required, opened a new log
I20250812 01:54:50.979460 8595 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Time spent bootstrapping tablet: real 0.022s user 0.014s sys 0.004s
I20250812 01:54:50.986773 8593 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:50.987982 8593 raft_consensus.cc:383] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:50.988333 8593 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 938e75c950ab49aaa60ab39338e88cf3, State: Initialized, Role: FOLLOWER
I20250812 01:54:50.989405 8593 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:50.994012 8593 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Time spent starting tablet: real 0.026s user 0.023s sys 0.002s
I20250812 01:54:50.995558 8594 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:50.996480 8594 raft_consensus.cc:383] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:50.996845 8594 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 299abc19c2f44f32827dfd86b2119473, State: Initialized, Role: FOLLOWER
I20250812 01:54:50.997766 8594 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.001606 8599 raft_consensus.cc:491] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:54:51.002120 8599 raft_consensus.cc:513] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.007347 8599 leader_election.cc:290] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653), 299abc19c2f44f32827dfd86b2119473 (127.2.74.65:46441)
I20250812 01:54:51.009004 8595 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.010224 8595 raft_consensus.cc:383] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:51.012002 8595 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9c83f2a853c1461e86625f2e1479f413, State: Initialized, Role: FOLLOWER
I20250812 01:54:51.014215 8595 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.018093 8594 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Time spent starting tablet: real 0.050s user 0.031s sys 0.014s
I20250812 01:54:51.022541 8569 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
I20250812 01:54:51.023658 8595 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Time spent starting tablet: real 0.044s user 0.029s sys 0.011s
I20250812 01:54:51.027788 8524 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "938e75c950ab49aaa60ab39338e88cf3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413" is_pre_election: true
I20250812 01:54:51.027880 8258 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "938e75c950ab49aaa60ab39338e88cf3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "299abc19c2f44f32827dfd86b2119473" is_pre_election: true
I20250812 01:54:51.028734 8524 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 938e75c950ab49aaa60ab39338e88cf3 in term 0.
I20250812 01:54:51.028736 8258 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 938e75c950ab49aaa60ab39338e88cf3 in term 0.
I20250812 01:54:51.030061 8324 leader_election.cc:304] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 938e75c950ab49aaa60ab39338e88cf3, 9c83f2a853c1461e86625f2e1479f413; no voters:
I20250812 01:54:51.030910 8599 raft_consensus.cc:2802] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:54:51.031191 8599 raft_consensus.cc:491] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:54:51.031456 8599 raft_consensus.cc:3058] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:51.036355 8599 raft_consensus.cc:513] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.037765 8599 leader_election.cc:290] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [CANDIDATE]: Term 1 election: Requested vote from peers 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653), 299abc19c2f44f32827dfd86b2119473 (127.2.74.65:46441)
I20250812 01:54:51.038456 8524 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "938e75c950ab49aaa60ab39338e88cf3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413"
I20250812 01:54:51.038676 8258 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "938e75c950ab49aaa60ab39338e88cf3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "299abc19c2f44f32827dfd86b2119473"
I20250812 01:54:51.038892 8524 raft_consensus.cc:3058] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:51.039119 8258 raft_consensus.cc:3058] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:51.043397 8524 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 938e75c950ab49aaa60ab39338e88cf3 in term 1.
I20250812 01:54:51.043408 8258 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 938e75c950ab49aaa60ab39338e88cf3 in term 1.
I20250812 01:54:51.044245 8324 leader_election.cc:304] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 938e75c950ab49aaa60ab39338e88cf3, 9c83f2a853c1461e86625f2e1479f413; no voters:
I20250812 01:54:51.044875 8599 raft_consensus.cc:2802] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:51.046423 8599 raft_consensus.cc:695] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 LEADER]: Becoming Leader. State: Replica: 938e75c950ab49aaa60ab39338e88cf3, State: Running, Role: LEADER
I20250812 01:54:51.047292 8599 consensus_queue.cc:237] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:54:51.057353 8114 catalog_manager.cc:5582] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 reported cstate change: term changed from 0 to 1, leader changed from <none> to 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66). New cstate: current_term: 1 leader_uuid: "938e75c950ab49aaa60ab39338e88cf3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } health_report { overall_health: UNKNOWN } } }
W20250812 01:54:51.115615 8437 tablet.cc:2378] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250812 01:54:51.227331 8304 tablet.cc:2378] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:54:51.236245 8524 raft_consensus.cc:1273] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Refusing update from remote peer 938e75c950ab49aaa60ab39338e88cf3: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250812 01:54:51.236248 8258 raft_consensus.cc:1273] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Refusing update from remote peer 938e75c950ab49aaa60ab39338e88cf3: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250812 01:54:51.238220 8604 consensus_queue.cc:1035] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:54:51.238878 8599 consensus_queue.cc:1035] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250812 01:54:51.241621 8570 tablet.cc:2378] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:54:51.272401 8613 mvcc.cc:204] Tried to move back new op lower bound from 7188331279286358016 to 7188331278537093120. Current Snapshot: MvccSnapshot[applied={T|T < 7188331279286358016}]
W20250812 01:54:53.892947 8609 meta_cache.cc:1261] Time spent looking up entry by key: real 0.069s user 0.000s sys 0.002s
W20250812 01:54:53.892870 8608 meta_cache.cc:1261] Time spent looking up entry by key: real 0.067s user 0.014s sys 0.030s
I20250812 01:54:56.514005 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8081
W20250812 01:54:56.893813 8647 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:54:56.894449 8647 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:54:56.928869 8647 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250812 01:54:57.330140 8436 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:41705 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:41705: connect: Connection refused (error 111)
W20250812 01:54:57.333300 8569 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:41705 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:41705: connect: Connection refused (error 111)
W20250812 01:54:57.368535 8303 heartbeater.cc:646] Failed to heartbeat to 127.2.74.126:41705 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.2.74.126:41705: connect: Connection refused (error 111)
W20250812 01:54:58.289753 8647 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.319s user 0.505s sys 0.793s
W20250812 01:54:58.290138 8647 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.319s user 0.505s sys 0.793s
I20250812 01:54:58.423277 8647 minidump.cc:252] Setting minidump size limit to 20M
I20250812 01:54:58.425890 8647 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:54:58.427321 8647 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:54:58.440443 8681 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:54:58.442353 8682 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:58.610832 8647 server_base.cc:1047] running on GCE node
W20250812 01:54:58.610950 8684 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:54:58.612071 8647 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250812 01:54:58.612545 8647 hybrid_clock.cc:648] HybridClock initialized: now 1754963698612519 us; error 143516 us; skew 500 ppm
I20250812 01:54:58.613307 8647 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:54:58.617820 8647 webserver.cc:489] Webserver started at http://0.0.0.0:39461/ using document root <none> and password file <none>
I20250812 01:54:58.618676 8647 fs_manager.cc:362] Metadata directory not provided
I20250812 01:54:58.618916 8647 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:54:58.619329 8647 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250812 01:54:58.624624 8647 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/instance:
uuid: "720adeaa3f4246d5be6782d90cc5c760"
format_stamp: "Formatted at 2025-08-12 01:54:58 on dist-test-slave-3nxt"
I20250812 01:54:58.625749 8647 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal/instance:
uuid: "720adeaa3f4246d5be6782d90cc5c760"
format_stamp: "Formatted at 2025-08-12 01:54:58 on dist-test-slave-3nxt"
I20250812 01:54:58.632256 8647 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250812 01:54:58.637130 8690 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:54:58.638051 8647 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250812 01:54:58.638355 8647 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
uuid: "720adeaa3f4246d5be6782d90cc5c760"
format_stamp: "Formatted at 2025-08-12 01:54:58 on dist-test-slave-3nxt"
I20250812 01:54:58.638681 8647 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:54:58.744093 8647 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:54:58.745963 8647 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:54:58.746536 8647 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:54:58.759609 8647 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:54:58.774766 8647 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Bootstrap starting.
I20250812 01:54:58.779673 8647 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Neither blocks nor log segments found. Creating new log.
I20250812 01:54:58.781404 8647 log.cc:826] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Log is configured to *not* fsync() on all Append() calls
I20250812 01:54:58.786074 8647 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: No bootstrap required, opened a new log
I20250812 01:54:58.801964 8647 raft_consensus.cc:357] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER }
I20250812 01:54:58.802474 8647 raft_consensus.cc:383] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:54:58.802696 8647 raft_consensus.cc:738] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 720adeaa3f4246d5be6782d90cc5c760, State: Initialized, Role: FOLLOWER
I20250812 01:54:58.803304 8647 consensus_queue.cc:260] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER }
I20250812 01:54:58.803766 8647 raft_consensus.cc:397] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:54:58.804000 8647 raft_consensus.cc:491] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:54:58.804279 8647 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:54:58.808140 8647 raft_consensus.cc:513] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER }
I20250812 01:54:58.808812 8647 leader_election.cc:304] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 720adeaa3f4246d5be6782d90cc5c760; no voters:
I20250812 01:54:58.810580 8647 leader_election.cc:290] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250812 01:54:58.810938 8697 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:54:58.814255 8697 raft_consensus.cc:695] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 LEADER]: Becoming Leader. State: Replica: 720adeaa3f4246d5be6782d90cc5c760, State: Running, Role: LEADER
I20250812 01:54:58.815055 8697 consensus_queue.cc:237] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER }
I20250812 01:54:58.822247 8698 sys_catalog.cc:455] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "720adeaa3f4246d5be6782d90cc5c760" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER } }
I20250812 01:54:58.822927 8698 sys_catalog.cc:458] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:58.823926 8699 sys_catalog.cc:455] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 720adeaa3f4246d5be6782d90cc5c760. Latest consensus state: current_term: 1 leader_uuid: "720adeaa3f4246d5be6782d90cc5c760" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER } }
I20250812 01:54:58.824440 8699 sys_catalog.cc:458] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: This master's current role is: LEADER
I20250812 01:54:58.836679 8647 tablet_replica.cc:331] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: stopping tablet replica
I20250812 01:54:58.837203 8647 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 LEADER]: Raft consensus shutting down.
I20250812 01:54:58.837607 8647 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250812 01:54:58.839596 8647 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250812 01:54:58.840011 8647 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250812 01:54:58.867017 8647 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250812 01:54:59.899233 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8174
I20250812 01:54:59.936000 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8307
I20250812 01:54:59.976984 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8440
I20250812 01:55:00.014076 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:41705
--webserver_interface=127.2.74.126
--webserver_port=37101
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.2.74.126:41705 with env {}
W20250812 01:55:00.321130 8707 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:55:00.321745 8707 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:55:00.322213 8707 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:55:00.354609 8707 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250812 01:55:00.355003 8707 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:55:00.355262 8707 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250812 01:55:00.355512 8707 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250812 01:55:00.392020 8707 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.2.74.126:41705
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.2.74.126:41705
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.2.74.126
--webserver_port=37101
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:55:00.393414 8707 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:55:00.395049 8707 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:55:00.406029 8713 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:00.406490 8714 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:01.606556 8707 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.200s user 0.396s sys 0.799s
W20250812 01:55:01.606549 8715 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1199 milliseconds
W20250812 01:55:01.607043 8707 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.201s user 0.397s sys 0.799s
W20250812 01:55:01.607292 8716 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:01.607404 8707 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250812 01:55:01.608716 8707 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:55:01.611328 8707 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:55:01.612699 8707 hybrid_clock.cc:648] HybridClock initialized: now 1754963701612664 us; error 42 us; skew 500 ppm
I20250812 01:55:01.613495 8707 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:55:01.620965 8707 webserver.cc:489] Webserver started at http://127.2.74.126:37101/ using document root <none> and password file <none>
I20250812 01:55:01.622018 8707 fs_manager.cc:362] Metadata directory not provided
I20250812 01:55:01.622255 8707 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:55:01.630460 8707 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250812 01:55:01.635399 8723 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:55:01.636780 8707 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:55:01.637140 8707 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
uuid: "720adeaa3f4246d5be6782d90cc5c760"
format_stamp: "Formatted at 2025-08-12 01:54:58 on dist-test-slave-3nxt"
I20250812 01:55:01.639158 8707 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:55:01.705211 8707 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:55:01.706686 8707 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:55:01.707113 8707 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:55:01.778364 8707 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.126:41705
I20250812 01:55:01.778467 8774 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.126:41705 every 8 connection(s)
I20250812 01:55:01.781240 8707 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb
I20250812 01:55:01.784652 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8707
I20250812 01:55:01.786079 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.65:46441
--local_ip_for_outbound_sockets=127.2.74.65
--tserver_master_addrs=127.2.74.126:41705
--webserver_port=33469
--webserver_interface=127.2.74.65
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250812 01:55:01.791599 8775 sys_catalog.cc:263] Verifying existing consensus state
I20250812 01:55:01.810508 8775 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Bootstrap starting.
I20250812 01:55:01.825786 8775 log.cc:826] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Log is configured to *not* fsync() on all Append() calls
I20250812 01:55:01.840891 8775 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:55:01.841989 8775 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Bootstrap complete.
I20250812 01:55:01.873208 8775 raft_consensus.cc:357] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:55:01.874101 8775 raft_consensus.cc:738] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 720adeaa3f4246d5be6782d90cc5c760, State: Initialized, Role: FOLLOWER
I20250812 01:55:01.875026 8775 consensus_queue.cc:260] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:55:01.875718 8775 raft_consensus.cc:397] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250812 01:55:01.876061 8775 raft_consensus.cc:491] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250812 01:55:01.876438 8775 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:55:01.882757 8775 raft_consensus.cc:513] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:55:01.883680 8775 leader_election.cc:304] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 720adeaa3f4246d5be6782d90cc5c760; no voters:
I20250812 01:55:01.885802 8775 leader_election.cc:290] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250812 01:55:01.886286 8779 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:55:01.890005 8779 raft_consensus.cc:695] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [term 2 LEADER]: Becoming Leader. State: Replica: 720adeaa3f4246d5be6782d90cc5c760, State: Running, Role: LEADER
I20250812 01:55:01.891043 8779 consensus_queue.cc:237] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } }
I20250812 01:55:01.891731 8775 sys_catalog.cc:564] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: configured and running, proceeding with master startup.
I20250812 01:55:01.899350 8781 sys_catalog.cc:455] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 720adeaa3f4246d5be6782d90cc5c760. Latest consensus state: current_term: 2 leader_uuid: "720adeaa3f4246d5be6782d90cc5c760" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } } }
I20250812 01:55:01.899343 8780 sys_catalog.cc:455] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "720adeaa3f4246d5be6782d90cc5c760" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "720adeaa3f4246d5be6782d90cc5c760" member_type: VOTER last_known_addr { host: "127.2.74.126" port: 41705 } } }
I20250812 01:55:01.900185 8780 sys_catalog.cc:458] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: This master's current role is: LEADER
I20250812 01:55:01.900185 8781 sys_catalog.cc:458] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760 [sys.catalog]: This master's current role is: LEADER
I20250812 01:55:01.911605 8785 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250812 01:55:01.928128 8785 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=af171ec38b614867bee71c21d2a992e4]
I20250812 01:55:01.935456 8785 tablet_loader.cc:96] loaded metadata for tablet 960c593e29924de9b11e0c95828a091e (table pre_rebuild [id=af171ec38b614867bee71c21d2a992e4])
I20250812 01:55:01.937088 8785 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250812 01:55:01.963177 8785 catalog_manager.cc:1349] Generated new cluster ID: 464186c312af4cc0962191b0984b218d
I20250812 01:55:01.963492 8785 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250812 01:55:01.995113 8785 catalog_manager.cc:1372] Generated new certificate authority record
I20250812 01:55:01.997192 8785 catalog_manager.cc:1506] Loading token signing keys...
I20250812 01:55:02.014416 8785 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Generated new TSK 0
I20250812 01:55:02.015632 8785 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250812 01:55:02.175860 8777 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:55:02.176389 8777 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:55:02.176932 8777 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:55:02.209169 8777 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:55:02.210062 8777 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.65
I20250812 01:55:02.246523 8777 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.65:46441
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.2.74.65
--webserver_port=33469
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.65
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:55:02.247905 8777 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:55:02.249491 8777 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:55:02.262310 8803 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:03.666054 8802 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 8777
W20250812 01:55:04.077569 8777 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.813s user 0.000s sys 0.003s
W20250812 01:55:04.077975 8777 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.814s user 0.000s sys 0.003s
W20250812 01:55:02.263427 8804 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:04.078608 8805 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1813 milliseconds
I20250812 01:55:04.080425 8777 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250812 01:55:04.080991 8807 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:04.083701 8777 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:55:04.086354 8777 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:55:04.087770 8777 hybrid_clock.cc:648] HybridClock initialized: now 1754963704087710 us; error 74 us; skew 500 ppm
I20250812 01:55:04.088568 8777 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:55:04.095490 8777 webserver.cc:489] Webserver started at http://127.2.74.65:33469/ using document root <none> and password file <none>
I20250812 01:55:04.096441 8777 fs_manager.cc:362] Metadata directory not provided
I20250812 01:55:04.096725 8777 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:55:04.105026 8777 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250812 01:55:04.109911 8814 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:55:04.111125 8777 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.005s
I20250812 01:55:04.111443 8777 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
uuid: "299abc19c2f44f32827dfd86b2119473"
format_stamp: "Formatted at 2025-08-12 01:54:46 on dist-test-slave-3nxt"
I20250812 01:55:04.113458 8777 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:55:04.167925 8777 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:55:04.169427 8777 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:55:04.169876 8777 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:55:04.172973 8777 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:55:04.179369 8821 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:55:04.186997 8777 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:55:04.187280 8777 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.002s sys 0.001s
I20250812 01:55:04.187613 8777 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:55:04.192799 8777 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:55:04.193060 8777 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.006s sys 0.000s
I20250812 01:55:04.193439 8821 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Bootstrap starting.
I20250812 01:55:04.373736 8777 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.65:46441
I20250812 01:55:04.373885 8927 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.65:46441 every 8 connection(s)
I20250812 01:55:04.377549 8777 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb
I20250812 01:55:04.381752 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8777
I20250812 01:55:04.384078 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.66:41359
--local_ip_for_outbound_sockets=127.2.74.66
--tserver_master_addrs=127.2.74.126:41705
--webserver_port=39669
--webserver_interface=127.2.74.66
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250812 01:55:04.441334 8928 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:55:04.441851 8928 heartbeater.cc:461] Registering TS with master...
I20250812 01:55:04.443063 8928 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:55:04.447793 8740 ts_manager.cc:194] Registered new tserver with Master: 299abc19c2f44f32827dfd86b2119473 (127.2.74.65:46441)
I20250812 01:55:04.455554 8740 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.65:60663
I20250812 01:55:04.510932 8821 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Log is configured to *not* fsync() on all Append() calls
W20250812 01:55:04.869292 8930 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:55:04.869958 8930 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:55:04.870754 8930 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:55:04.930610 8930 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:55:04.932050 8930 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.66
I20250812 01:55:04.997006 8930 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.66:41359
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.2.74.66
--webserver_port=39669
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.66
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:55:04.998715 8930 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:55:05.000931 8930 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:55:05.016510 8939 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:05.460007 8928 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
W20250812 01:55:06.418110 8938 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 8930
W20250812 01:55:05.017908 8940 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:06.672533 8938 kernel_stack_watchdog.cc:198] Thread 8930 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250812 01:55:06.673466 8930 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.658s user 0.604s sys 1.050s
W20250812 01:55:06.673897 8930 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.658s user 0.604s sys 1.050s
I20250812 01:55:06.681433 8930 server_base.cc:1047] running on GCE node
W20250812 01:55:06.682587 8944 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:06.683914 8930 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:55:06.686432 8930 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:55:06.687894 8930 hybrid_clock.cc:648] HybridClock initialized: now 1754963706687843 us; error 46 us; skew 500 ppm
I20250812 01:55:06.688972 8930 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:55:06.696401 8930 webserver.cc:489] Webserver started at http://127.2.74.66:39669/ using document root <none> and password file <none>
I20250812 01:55:06.697692 8930 fs_manager.cc:362] Metadata directory not provided
I20250812 01:55:06.697999 8930 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:55:06.699048 8821 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:55:06.699968 8821 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Bootstrap complete.
I20250812 01:55:06.701748 8821 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Time spent bootstrapping tablet: real 2.509s user 2.424s sys 0.076s
I20250812 01:55:06.708770 8930 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.002s sys 0.005s
I20250812 01:55:06.713752 8949 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:55:06.714911 8930 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250812 01:55:06.715291 8930 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
uuid: "938e75c950ab49aaa60ab39338e88cf3"
format_stamp: "Formatted at 2025-08-12 01:54:48 on dist-test-slave-3nxt"
I20250812 01:55:06.718068 8930 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:55:06.716861 8821 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:06.719969 8821 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 299abc19c2f44f32827dfd86b2119473, State: Initialized, Role: FOLLOWER
I20250812 01:55:06.720909 8821 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:06.724462 8821 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Time spent starting tablet: real 0.022s user 0.022s sys 0.000s
I20250812 01:55:06.766461 8930 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:55:06.767889 8930 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:55:06.768327 8930 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:55:06.770887 8930 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:55:06.776413 8957 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:55:06.783941 8930 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:55:06.784242 8930 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250812 01:55:06.784554 8930 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:55:06.791805 8930 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:55:06.792083 8930 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.005s sys 0.000s
I20250812 01:55:06.792452 8957 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Bootstrap starting.
I20250812 01:55:06.964520 8930 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.66:41359
I20250812 01:55:06.964802 9063 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.66:41359 every 8 connection(s)
I20250812 01:55:06.967610 8930 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb
I20250812 01:55:06.977708 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 8930
I20250812 01:55:06.979584 2345 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
/tmp/dist-test-taskk8keBh/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.2.74.67:33653
--local_ip_for_outbound_sockets=127.2.74.67
--tserver_master_addrs=127.2.74.126:41705
--webserver_port=38123
--webserver_interface=127.2.74.67
--builtin_ntp_servers=127.2.74.84:43717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250812 01:55:06.992877 9064 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:55:06.993307 9064 heartbeater.cc:461] Registering TS with master...
I20250812 01:55:06.994334 9064 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:55:06.997546 8740 ts_manager.cc:194] Registered new tserver with Master: 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359)
I20250812 01:55:06.999661 8740 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.66:32905
I20250812 01:55:07.044804 8957 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Log is configured to *not* fsync() on all Append() calls
W20250812 01:55:07.310547 9068 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250812 01:55:07.311043 9068 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250812 01:55:07.311534 9068 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250812 01:55:07.343132 9068 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250812 01:55:07.343991 9068 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.2.74.67
I20250812 01:55:07.381052 9068 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.2.74.84:43717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.2.74.67:33653
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.2.74.67
--webserver_port=38123
--tserver_master_addrs=127.2.74.126:41705
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.2.74.67
--log_dir=/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 69cd6ed63f1a62133b0bb021015c66e1516002dc
build type FASTDEBUG
built by None at 12 Aug 2025 01:43:22 UTC on 5fd53c4cbb9d
build id 7525
TSAN enabled
I20250812 01:55:07.382369 9068 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250812 01:55:07.383941 9068 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250812 01:55:07.397354 9075 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:08.002938 9064 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
I20250812 01:55:08.218624 9081 raft_consensus.cc:491] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:55:08.219215 9081 raft_consensus.cc:513] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:08.222100 9081 leader_election.cc:290] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359), 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653)
W20250812 01:55:08.250620 8815 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.2.74.67:33653: connect: Connection refused (error 111)
W20250812 01:55:08.259698 8815 leader_election.cc:336] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653): Network error: Client connection negotiation failed: client connection to 127.2.74.67:33653: connect: Connection refused (error 111)
I20250812 01:55:08.258394 9019 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "938e75c950ab49aaa60ab39338e88cf3" is_pre_election: true
W20250812 01:55:08.267562 8815 leader_election.cc:343] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359): Illegal state: must be running to vote when last-logged opid is not known
I20250812 01:55:08.267938 8815 leader_election.cc:304] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 299abc19c2f44f32827dfd86b2119473; no voters: 938e75c950ab49aaa60ab39338e88cf3, 9c83f2a853c1461e86625f2e1479f413
I20250812 01:55:08.268606 9081 raft_consensus.cc:2747] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250812 01:55:07.402906 9068 server_base.cc:1047] running on GCE node
W20250812 01:55:07.403928 9078 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250812 01:55:07.399396 9076 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250812 01:55:08.595531 9068 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250812 01:55:08.598377 9068 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250812 01:55:08.599850 9068 hybrid_clock.cc:648] HybridClock initialized: now 1754963708599806 us; error 69 us; skew 500 ppm
I20250812 01:55:08.600693 9068 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250812 01:55:08.607650 9068 webserver.cc:489] Webserver started at http://127.2.74.67:38123/ using document root <none> and password file <none>
I20250812 01:55:08.608640 9068 fs_manager.cc:362] Metadata directory not provided
I20250812 01:55:08.608880 9068 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250812 01:55:08.617302 9068 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.007s sys 0.000s
I20250812 01:55:08.622435 9090 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250812 01:55:08.623687 9068 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250812 01:55:08.624099 9068 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data,/tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
uuid: "9c83f2a853c1461e86625f2e1479f413"
format_stamp: "Formatted at 2025-08-12 01:54:50 on dist-test-slave-3nxt"
I20250812 01:55:08.626853 9068 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250812 01:55:08.706547 9068 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250812 01:55:08.708510 9068 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250812 01:55:08.709136 9068 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250812 01:55:08.712375 9068 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250812 01:55:08.719859 9097 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250812 01:55:08.727473 9068 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250812 01:55:08.727794 9068 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.002s sys 0.001s
I20250812 01:55:08.728137 9068 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250812 01:55:08.735495 9068 ts_tablet_manager.cc:610] Registered 1 tablets
I20250812 01:55:08.735769 9068 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.005s sys 0.000s
I20250812 01:55:08.736140 9097 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Bootstrap starting.
I20250812 01:55:08.953864 9068 rpc_server.cc:307] RPC server started. Bound to: 127.2.74.67:33653
I20250812 01:55:08.954046 9203 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.2.74.67:33653 every 8 connection(s)
I20250812 01:55:08.957577 9068 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb
I20250812 01:55:08.965633 2345 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu as pid 9068
I20250812 01:55:09.028391 9204 heartbeater.cc:344] Connected to a master server at 127.2.74.126:41705
I20250812 01:55:09.029136 9204 heartbeater.cc:461] Registering TS with master...
I20250812 01:55:09.030978 9204 heartbeater.cc:507] Master 127.2.74.126:41705 requested a full tablet report, sending...
I20250812 01:55:09.035398 8740 ts_manager.cc:194] Registered new tserver with Master: 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653)
I20250812 01:55:09.037588 8740 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.2.74.67:42317
I20250812 01:55:09.047721 2345 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250812 01:55:09.058350 9097 log.cc:826] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Log is configured to *not* fsync() on all Append() calls
I20250812 01:55:09.334107 8957 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:55:09.334909 8957 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Bootstrap complete.
I20250812 01:55:09.336210 8957 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Time spent bootstrapping tablet: real 2.544s user 2.456s sys 0.040s
I20250812 01:55:09.341375 8957 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:09.343325 8957 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 938e75c950ab49aaa60ab39338e88cf3, State: Initialized, Role: FOLLOWER
I20250812 01:55:09.344040 8957 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:09.347079 8957 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Time spent starting tablet: real 0.011s user 0.013s sys 0.000s
I20250812 01:55:10.040783 9204 heartbeater.cc:499] Master 127.2.74.126:41705 was elected leader, sending a full tablet report...
I20250812 01:55:10.439055 9218 raft_consensus.cc:491] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:55:10.439441 9218 raft_consensus.cc:513] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:10.440987 9218 leader_election.cc:290] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359), 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653)
I20250812 01:55:10.441833 9019 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "938e75c950ab49aaa60ab39338e88cf3" is_pre_election: true
I20250812 01:55:10.442536 9019 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 1.
I20250812 01:55:10.445120 8815 leader_election.cc:304] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 299abc19c2f44f32827dfd86b2119473, 938e75c950ab49aaa60ab39338e88cf3; no voters:
I20250812 01:55:10.445883 9218 raft_consensus.cc:2802] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250812 01:55:10.446146 9218 raft_consensus.cc:491] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:55:10.446400 9218 raft_consensus.cc:3058] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:55:10.454452 9218 raft_consensus.cc:513] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:10.456908 9019 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "938e75c950ab49aaa60ab39338e88cf3"
I20250812 01:55:10.457248 9218 leader_election.cc:290] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 election: Requested vote from peers 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359), 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653)
I20250812 01:55:10.457448 9019 raft_consensus.cc:3058] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Advancing to term 2
I20250812 01:55:10.456058 9159 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413" is_pre_election: true
I20250812 01:55:10.458412 9158 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "960c593e29924de9b11e0c95828a091e" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413"
W20250812 01:55:10.465997 8815 leader_election.cc:343] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653): Illegal state: must be running to vote when last-logged opid is not known
I20250812 01:55:10.466339 9019 raft_consensus.cc:2466] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 2.
W20250812 01:55:10.466882 8815 leader_election.cc:343] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653): Illegal state: must be running to vote when last-logged opid is not known
I20250812 01:55:10.467629 8815 leader_election.cc:304] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 299abc19c2f44f32827dfd86b2119473, 938e75c950ab49aaa60ab39338e88cf3; no voters: 9c83f2a853c1461e86625f2e1479f413
I20250812 01:55:10.468233 9218 raft_consensus.cc:2802] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 FOLLOWER]: Leader election won for term 2
I20250812 01:55:10.469775 9218 raft_consensus.cc:695] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 LEADER]: Becoming Leader. State: Replica: 299abc19c2f44f32827dfd86b2119473, State: Running, Role: LEADER
I20250812 01:55:10.470719 9218 consensus_queue.cc:237] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:10.482563 8740 catalog_manager.cc:5582] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 reported cstate change: term changed from 0 to 2, leader changed from <none> to 299abc19c2f44f32827dfd86b2119473 (127.2.74.65), VOTER 299abc19c2f44f32827dfd86b2119473 (127.2.74.65) added, VOTER 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66) added, VOTER 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67) added. New cstate: current_term: 2 leader_uuid: "299abc19c2f44f32827dfd86b2119473" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } health_report { overall_health: HEALTHY } } }
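For reference, the vote accounting behind the election decided above follows the usual Raft majority rule for the three-voter config in this tablet:

    majority_size = floor(num_voters / 2) + 1 = floor(3 / 2) + 1 = 2

The candidate's own vote plus the yes vote from 938e75c950ab49aaa60ab39338e88cf3 meets that threshold, and the refusal from 9c83f2a853c1461e86625f2e1479f413 (whose replica is still bootstrapping, so its last-logged opid is unknown) cannot block the outcome, which is why the replica on 127.2.74.65 assumes leadership for term 2 in the lines above.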
W20250812 01:55:10.967130 2345 scanner-internal.cc:458] Time spent opening tablet: real 1.888s user 0.004s sys 0.003s
I20250812 01:55:10.977594 9019 raft_consensus.cc:1273] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Refusing update from remote peer 299abc19c2f44f32827dfd86b2119473: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250812 01:55:10.979456 9218 consensus_queue.cc:1035] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Connected to new peer: Peer: permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
W20250812 01:55:11.060904 8815 consensus_peers.cc:489] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 -> Peer 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653): Couldn't send request to peer 9c83f2a853c1461e86625f2e1479f413. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
I20250812 01:55:11.101182 8883 consensus_queue.cc:237] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:11.108273 9019 raft_consensus.cc:1273] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Refusing update from remote peer 299abc19c2f44f32827dfd86b2119473: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250812 01:55:11.109827 9218 consensus_queue.cc:1035] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Connected to new peer: Peer: permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250812 01:55:11.116930 9218 raft_consensus.cc:2953] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } } }
I20250812 01:55:11.119481 9019 raft_consensus.cc:2953] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } } }
I20250812 01:55:11.137004 8727 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 960c593e29924de9b11e0c95828a091e with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250812 01:55:11.142009 8739 catalog_manager.cc:5582] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 reported cstate change: config changed from index -1 to 207, VOTER 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67) evicted. New cstate: current_term: 2 leader_uuid: "299abc19c2f44f32827dfd86b2119473" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } health_report { overall_health: HEALTHY } } }
I20250812 01:55:11.157001 9097 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250812 01:55:11.158365 9097 tablet_bootstrap.cc:492] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Bootstrap complete.
I20250812 01:55:11.160218 9097 ts_tablet_manager.cc:1397] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Time spent bootstrapping tablet: real 2.425s user 2.307s sys 0.071s
I20250812 01:55:11.169816 9097 raft_consensus.cc:357] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:11.174044 9097 raft_consensus.cc:738] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9c83f2a853c1461e86625f2e1479f413, State: Initialized, Role: FOLLOWER
I20250812 01:55:11.175081 9097 consensus_queue.cc:260] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:11.181723 9097 ts_tablet_manager.cc:1428] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Time spent starting tablet: real 0.021s user 0.012s sys 0.003s
I20250812 01:55:11.186483 8883 consensus_queue.cc:237] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:11.198832 9218 raft_consensus.cc:2953] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } } }
I20250812 01:55:11.207059 9139 tablet_service.cc:1515] Processing DeleteTablet for tablet 960c593e29924de9b11e0c95828a091e with delete_type TABLET_DATA_TOMBSTONED (TS 9c83f2a853c1461e86625f2e1479f413 not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:55114
I20250812 01:55:11.210422 8727 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 960c593e29924de9b11e0c95828a091e with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250812 01:55:11.215345 9242 tablet_replica.cc:331] stopping tablet replica
I20250812 01:55:11.214990 8738 catalog_manager.cc:5582] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 reported cstate change: config changed from index 207 to 208, VOTER 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66) evicted. New cstate: current_term: 2 leader_uuid: "299abc19c2f44f32827dfd86b2119473" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } health_report { overall_health: HEALTHY } } }
I20250812 01:55:11.216197 9242 raft_consensus.cc:2241] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250812 01:55:11.216735 9242 raft_consensus.cc:2270] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250812 01:55:11.239675 9242 ts_tablet_manager.cc:1905] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250812 01:55:11.248260 8999 tablet_service.cc:1515] Processing DeleteTablet for tablet 960c593e29924de9b11e0c95828a091e with delete_type TABLET_DATA_TOMBSTONED (TS 938e75c950ab49aaa60ab39338e88cf3 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:33606
I20250812 01:55:11.255429 9244 tablet_replica.cc:331] stopping tablet replica
I20250812 01:55:11.256294 9244 raft_consensus.cc:2241] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250812 01:55:11.257025 9244 raft_consensus.cc:2270] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250812 01:55:11.258690 9242 ts_tablet_manager.cc:1918] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250812 01:55:11.259135 9242 log.cc:1199] T 960c593e29924de9b11e0c95828a091e P 9c83f2a853c1461e86625f2e1479f413: Deleting WAL directory at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/wal/wals/960c593e29924de9b11e0c95828a091e
I20250812 01:55:11.261022 8724 catalog_manager.cc:4928] TS 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653): tablet 960c593e29924de9b11e0c95828a091e (table pre_rebuild [id=af171ec38b614867bee71c21d2a992e4]) successfully deleted
I20250812 01:55:11.279778 9244 ts_tablet_manager.cc:1905] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250812 01:55:11.295627 9244 ts_tablet_manager.cc:1918] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250812 01:55:11.296063 9244 log.cc:1199] T 960c593e29924de9b11e0c95828a091e P 938e75c950ab49aaa60ab39338e88cf3: Deleting WAL directory at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/wal/wals/960c593e29924de9b11e0c95828a091e
I20250812 01:55:11.297948 8724 catalog_manager.cc:4928] TS 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359): tablet 960c593e29924de9b11e0c95828a091e (table pre_rebuild [id=af171ec38b614867bee71c21d2a992e4]) successfully deleted
I20250812 01:55:11.687325 8999 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:55:11.697647 9139 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:55:11.697764 8863 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
720adeaa3f4246d5be6782d90cc5c760 | 127.2.74.126:41705 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:43717 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
299abc19c2f44f32827dfd86b2119473 | 127.2.74.65:46441 | HEALTHY | <none> | 1 | 0
938e75c950ab49aaa60ab39338e88cf3 | 127.2.74.66:41359 | HEALTHY | <none> | 0 | 0
9c83f2a853c1461e86625f2e1479f413 | 127.2.74.67:33653 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.74.65 | experimental | 127.2.74.65:46441
local_ip_for_outbound_sockets | 127.2.74.66 | experimental | 127.2.74.66:41359
local_ip_for_outbound_sockets | 127.2.74.67 | experimental | 127.2.74.67:33653
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb | hidden | 127.2.74.65:46441
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb | hidden | 127.2.74.66:41359
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb | hidden | 127.2.74.67:33653
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:43717 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
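The block ending at "OK" above has the shape of Kudu's cluster health-check report (master summary, unusual-flag checks, tablet server summary, version summary, tablet summary, warnings); it is presumably what the test harness captured from a ksck-style verification pass. A roughly equivalent report can be produced by hand with the standard CLI subcommand, e.g. `kudu cluster ksck 127.2.74.126:41705` against the master address used in this run; the exact additional flags accepted vary by Kudu version.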
I20250812 01:55:12.002655 2345 log_verifier.cc:126] Checking tablet 960c593e29924de9b11e0c95828a091e
I20250812 01:55:12.281888 2345 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 960c593e29924de9b11e0c95828a091e
I20250812 01:55:12.284243 8739 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:50158:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250812 01:55:12.284868 8739 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:50158:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250812 01:55:12.298208 8739 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 720adeaa3f4246d5be6782d90cc5c760: Sending DeleteTablet for 1 replicas of tablet 960c593e29924de9b11e0c95828a091e
I20250812 01:55:12.300122 2345 test_util.cc:276] Using random seed: 1420448829
I20250812 01:55:12.300978 8863 tablet_service.cc:1515] Processing DeleteTablet for tablet 960c593e29924de9b11e0c95828a091e with delete_type TABLET_DATA_DELETED (Table deleted at 2025-08-12 01:55:12 UTC) from {username='slave'} at 127.0.0.1:51472
I20250812 01:55:12.303397 9275 tablet_replica.cc:331] stopping tablet replica
I20250812 01:55:12.304423 9275 raft_consensus.cc:2241] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 LEADER]: Raft consensus shutting down.
I20250812 01:55:12.305663 9275 raft_consensus.cc:2270] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250812 01:55:12.338858 8739 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:50180:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
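The CreateTable request dumped above fully specifies the post_rebuild schema. As a rough illustration only, a client-side equivalent under the public Kudu C++ client API might look like the sketch below; the method names are recalled from the published client examples and may differ slightly between versions, and the master address is simply the one appearing earlier in this log.

    #include <memory>
    #include <string>
    #include <vector>

    #include "kudu/client/client.h"
    #include "kudu/client/stubs.h"

    using kudu::client::KuduClient;
    using kudu::client::KuduClientBuilder;
    using kudu::client::KuduColumnSchema;
    using kudu::client::KuduSchema;
    using kudu::client::KuduSchemaBuilder;
    using kudu::client::KuduTableCreator;

    // Sketch: create a table matching the logged CreateTable request
    // (key INT32 primary key, int_val INT32 NOT NULL, string_val STRING nullable,
    // 3 replicas, range-partitioned on "key" with no explicit split rows).
    void CreatePostRebuildTable() {
      kudu::client::sp::shared_ptr<KuduClient> client;
      KUDU_CHECK_OK(KuduClientBuilder()
                        .add_master_server_addr("127.2.74.126:41705")
                        .Build(&client));

      KuduSchema schema;
      KuduSchemaBuilder b;
      b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
      b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
      b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
      KUDU_CHECK_OK(b.Build(&schema));

      std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
      KUDU_CHECK_OK(creator->table_name("post_rebuild")
                        .schema(&schema)
                        .set_range_partition_columns({"key"})
                        .num_replicas(3)
                        .Create());
    }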
W20250812 01:55:12.341396 8739 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250812 01:55:12.353144 9275 ts_tablet_manager.cc:1905] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250812 01:55:12.365789 9139 tablet_service.cc:1468] Processing CreateTablet for tablet 8f5a1341811f4647967ccef95758a27a (DEFAULT_TABLE table=post_rebuild [id=1008acc3212b419ca07b982bbf22dd21]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:55:12.366056 8999 tablet_service.cc:1468] Processing CreateTablet for tablet 8f5a1341811f4647967ccef95758a27a (DEFAULT_TABLE table=post_rebuild [id=1008acc3212b419ca07b982bbf22dd21]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:55:12.367202 8999 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8f5a1341811f4647967ccef95758a27a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:55:12.367183 9139 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8f5a1341811f4647967ccef95758a27a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:55:12.372057 9275 ts_tablet_manager.cc:1918] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250812 01:55:12.372639 9275 log.cc:1199] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Deleting WAL directory at /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/wal/wals/960c593e29924de9b11e0c95828a091e
I20250812 01:55:12.373842 9275 ts_tablet_manager.cc:1939] T 960c593e29924de9b11e0c95828a091e P 299abc19c2f44f32827dfd86b2119473: Deleting consensus metadata
I20250812 01:55:12.377161 8863 tablet_service.cc:1468] Processing CreateTablet for tablet 8f5a1341811f4647967ccef95758a27a (DEFAULT_TABLE table=post_rebuild [id=1008acc3212b419ca07b982bbf22dd21]), partition=RANGE (key) PARTITION UNBOUNDED
I20250812 01:55:12.378772 8863 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8f5a1341811f4647967ccef95758a27a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250812 01:55:12.381039 8727 catalog_manager.cc:4928] TS 299abc19c2f44f32827dfd86b2119473 (127.2.74.65:46441): tablet 960c593e29924de9b11e0c95828a091e (table pre_rebuild [id=af171ec38b614867bee71c21d2a992e4]) successfully deleted
I20250812 01:55:12.403545 9282 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: Bootstrap starting.
I20250812 01:55:12.409801 9283 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: Bootstrap starting.
I20250812 01:55:12.416322 9284 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: Bootstrap starting.
I20250812 01:55:12.418013 9282 tablet_bootstrap.cc:654] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: Neither blocks nor log segments found. Creating new log.
I20250812 01:55:12.419677 9283 tablet_bootstrap.cc:654] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: Neither blocks nor log segments found. Creating new log.
I20250812 01:55:12.429734 9284 tablet_bootstrap.cc:654] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: Neither blocks nor log segments found. Creating new log.
I20250812 01:55:12.432531 9283 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: No bootstrap required, opened a new log
I20250812 01:55:12.433073 9283 ts_tablet_manager.cc:1397] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: Time spent bootstrapping tablet: real 0.024s user 0.012s sys 0.007s
I20250812 01:55:12.436502 9283 raft_consensus.cc:357] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.437901 9283 raft_consensus.cc:383] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:55:12.438369 9283 raft_consensus.cc:738] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 299abc19c2f44f32827dfd86b2119473, State: Initialized, Role: FOLLOWER
I20250812 01:55:12.439469 9283 consensus_queue.cc:260] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.448184 9284 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: No bootstrap required, opened a new log
I20250812 01:55:12.448792 9284 ts_tablet_manager.cc:1397] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: Time spent bootstrapping tablet: real 0.033s user 0.015s sys 0.004s
I20250812 01:55:12.449234 9283 ts_tablet_manager.cc:1428] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: Time spent starting tablet: real 0.016s user 0.013s sys 0.002s
I20250812 01:55:12.451733 9282 tablet_bootstrap.cc:492] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: No bootstrap required, opened a new log
I20250812 01:55:12.452169 9282 ts_tablet_manager.cc:1397] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: Time spent bootstrapping tablet: real 0.049s user 0.009s sys 0.010s
I20250812 01:55:12.454854 9282 raft_consensus.cc:357] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.455549 9282 raft_consensus.cc:383] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:55:12.451543 9284 raft_consensus.cc:357] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.455837 9282 raft_consensus.cc:738] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 938e75c950ab49aaa60ab39338e88cf3, State: Initialized, Role: FOLLOWER
I20250812 01:55:12.455988 9284 raft_consensus.cc:383] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250812 01:55:12.456281 9284 raft_consensus.cc:738] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9c83f2a853c1461e86625f2e1479f413, State: Initialized, Role: FOLLOWER
I20250812 01:55:12.456805 9282 consensus_queue.cc:260] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.457013 9284 consensus_queue.cc:260] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.466441 9284 ts_tablet_manager.cc:1428] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: Time spent starting tablet: real 0.017s user 0.005s sys 0.007s
I20250812 01:55:12.468132 9282 ts_tablet_manager.cc:1428] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: Time spent starting tablet: real 0.016s user 0.000s sys 0.014s
W20250812 01:55:12.483858 9065 tablet.cc:2378] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:55:12.495110 9288 raft_consensus.cc:491] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250812 01:55:12.495656 9288 raft_consensus.cc:513] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.497864 9288 leader_election.cc:290] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653), 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359)
I20250812 01:55:12.498682 9159 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8f5a1341811f4647967ccef95758a27a" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413" is_pre_election: true
I20250812 01:55:12.498941 9019 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8f5a1341811f4647967ccef95758a27a" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "938e75c950ab49aaa60ab39338e88cf3" is_pre_election: true
I20250812 01:55:12.499419 9159 raft_consensus.cc:2466] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 0.
I20250812 01:55:12.499500 9019 raft_consensus.cc:2466] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 0.
I20250812 01:55:12.500715 8815 leader_election.cc:304] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 299abc19c2f44f32827dfd86b2119473, 938e75c950ab49aaa60ab39338e88cf3; no voters:
I20250812 01:55:12.501658 9288 raft_consensus.cc:2802] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250812 01:55:12.501955 9288 raft_consensus.cc:491] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250812 01:55:12.502277 9288 raft_consensus.cc:3058] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:55:12.506771 9288 raft_consensus.cc:513] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.508173 9288 leader_election.cc:290] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 1 election: Requested vote from peers 9c83f2a853c1461e86625f2e1479f413 (127.2.74.67:33653), 938e75c950ab49aaa60ab39338e88cf3 (127.2.74.66:41359)
I20250812 01:55:12.509080 9159 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8f5a1341811f4647967ccef95758a27a" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9c83f2a853c1461e86625f2e1479f413"
I20250812 01:55:12.509521 9159 raft_consensus.cc:3058] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 0 FOLLOWER]: Advancing to term 1
I20250812 01:55:12.509348 9019 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8f5a1341811f4647967ccef95758a27a" candidate_uuid: "299abc19c2f44f32827dfd86b2119473" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "938e75c950ab49aaa60ab39338e88cf3"
I20250812 01:55:12.509902 9019 raft_consensus.cc:3058] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 0 FOLLOWER]: Advancing to term 1
W20250812 01:55:12.512480 9207 tablet.cc:2378] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:55:12.514622 9159 raft_consensus.cc:2466] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 1.
I20250812 01:55:12.515514 8815 leader_election.cc:304] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 299abc19c2f44f32827dfd86b2119473, 9c83f2a853c1461e86625f2e1479f413; no voters:
I20250812 01:55:12.516158 9288 raft_consensus.cc:2802] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 1 FOLLOWER]: Leader election won for term 1
I20250812 01:55:12.516366 9019 raft_consensus.cc:2466] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 299abc19c2f44f32827dfd86b2119473 in term 1.
I20250812 01:55:12.517457 9288 raft_consensus.cc:695] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [term 1 LEADER]: Becoming Leader. State: Replica: 299abc19c2f44f32827dfd86b2119473, State: Running, Role: LEADER
I20250812 01:55:12.518594 9288 consensus_queue.cc:237] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } }
I20250812 01:55:12.526762 8738 catalog_manager.cc:5582] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 reported cstate change: term changed from 0 to 1, leader changed from <none> to 299abc19c2f44f32827dfd86b2119473 (127.2.74.65). New cstate: current_term: 1 leader_uuid: "299abc19c2f44f32827dfd86b2119473" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "299abc19c2f44f32827dfd86b2119473" member_type: VOTER last_known_addr { host: "127.2.74.65" port: 46441 } health_report { overall_health: HEALTHY } } }
W20250812 01:55:12.677870 8929 tablet.cc:2378] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250812 01:55:12.729132 9019 raft_consensus.cc:1273] T 8f5a1341811f4647967ccef95758a27a P 938e75c950ab49aaa60ab39338e88cf3 [term 1 FOLLOWER]: Refusing update from remote peer 299abc19c2f44f32827dfd86b2119473: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250812 01:55:12.729586 9159 raft_consensus.cc:1273] T 8f5a1341811f4647967ccef95758a27a P 9c83f2a853c1461e86625f2e1479f413 [term 1 FOLLOWER]: Refusing update from remote peer 299abc19c2f44f32827dfd86b2119473: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250812 01:55:12.731020 9291 consensus_queue.cc:1035] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Connected to new peer: Peer: permanent_uuid: "938e75c950ab49aaa60ab39338e88cf3" member_type: VOTER last_known_addr { host: "127.2.74.66" port: 41359 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250812 01:55:12.731715 9288 consensus_queue.cc:1035] T 8f5a1341811f4647967ccef95758a27a P 299abc19c2f44f32827dfd86b2119473 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9c83f2a853c1461e86625f2e1479f413" member_type: VOTER last_known_addr { host: "127.2.74.67" port: 33653 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250812 01:55:17.858520 8863 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250812 01:55:17.860248 8999 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250812 01:55:17.868395 9139 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+--------------------+---------
720adeaa3f4246d5be6782d90cc5c760 | 127.2.74.126:41705 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:43717 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
299abc19c2f44f32827dfd86b2119473 | 127.2.74.65:46441 | HEALTHY | <none> | 1 | 0
938e75c950ab49aaa60ab39338e88cf3 | 127.2.74.66:41359 | HEALTHY | <none> | 0 | 0
9c83f2a853c1461e86625f2e1479f413 | 127.2.74.67:33653 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.2.74.65 | experimental | 127.2.74.65:46441
local_ip_for_outbound_sockets | 127.2.74.66 | experimental | 127.2.74.66:41359
local_ip_for_outbound_sockets | 127.2.74.67 | experimental | 127.2.74.67:33653
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-0/data/info.pb | hidden | 127.2.74.65:46441
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-1/data/info.pb | hidden | 127.2.74.66:41359
server_dump_info_path | /tmp/dist-test-taskk8keBh/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754963523148890-2345-0/minicluster-data/ts-2/data/info.pb | hidden | 127.2.74.67:33653
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+-------------------+-------------------------
builtin_ntp_servers | 127.2.74.84:43717 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250812 01:55:18.120735 2345 log_verifier.cc:126] Checking tablet 8f5a1341811f4647967ccef95758a27a
I20250812 01:55:18.886989 2345 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 8f5a1341811f4647967ccef95758a27a
I20250812 01:55:18.887799 2345 log_verifier.cc:126] Checking tablet 960c593e29924de9b11e0c95828a091e
I20250812 01:55:18.888060 2345 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 960c593e29924de9b11e0c95828a091e
I20250812 01:55:18.913190 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8777
I20250812 01:55:18.960258 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8930
I20250812 01:55:19.013293 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 9068
I20250812 01:55:19.044322 2345 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskk8keBh/build/tsan/bin/kudu with pid 8707
2025-08-12T01:55:19Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (36851 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (36851 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (195882 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST
I20250812 01:55:19.110718 2345 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/meta_cache.cc:302: suppressed but not reported on 4 messages since previous log ~51 seconds ago