[==========] Running 21 tests from 3 test suites.
[----------] Global test environment set-up.
[----------] 17 tests from TxnCommitITest
[ RUN      ] TxnCommitITest.TestBasicCommits
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20251025 14:08:19.040226 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:32837
I20251025 14:08:19.040548 31499 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20251025 14:08:19.040711 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20251025 14:08:19.044642 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:19.044654 31504 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:19.044634 31505 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:19.044634 31507 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:19.045076 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:19.045125 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:19.045157 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401299045158 us; error 0 us; skew 500 ppm
I20251025 14:08:19.046154 31499 webserver.cc:492] Webserver started at http://127.30.194.254:39911/ using document root <none> and password file <none>
I20251025 14:08:19.046306 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:19.046356 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:19.046448 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:19.047477 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "2a1f67557ec14b329091094b5225cb62"
format_stamp: "Formatted at 2025-10-25 14:08:19 on dist-test-slave-v4l2"
I20251025 14:08:19.048698 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.002s
I20251025 14:08:19.049295 31512 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.049475 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.049544 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "2a1f67557ec14b329091094b5225cb62"
format_stamp: "Formatted at 2025-10-25 14:08:19 on dist-test-slave-v4l2"
I20251025 14:08:19.049604 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:19.065272 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:19.065539 31499 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20251025 14:08:19.065649 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:19.070335 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:32837
I20251025 14:08:19.070397 31574 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:32837 every 8 connection(s)
I20251025 14:08:19.070967 31575 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:19.073383 31575 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: Bootstrap starting.
I20251025 14:08:19.074131 31575 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:19.074379 31575 log.cc:826] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: Log is configured to *not* fsync() on all Append() calls
I20251025 14:08:19.074898 31575 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: No bootstrap required, opened a new log
I20251025 14:08:19.075990 31575 raft_consensus.cc:359] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER }
I20251025 14:08:19.076079 31575 raft_consensus.cc:385] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:19.076110 31575 raft_consensus.cc:740] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2a1f67557ec14b329091094b5225cb62, State: Initialized, Role: FOLLOWER
I20251025 14:08:19.076242 31575 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER }
I20251025 14:08:19.076296 31575 raft_consensus.cc:399] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:19.076319 31575 raft_consensus.cc:493] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:19.076365 31575 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:19.076828 31575 raft_consensus.cc:515] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER }
I20251025 14:08:19.076925 31575 leader_election.cc:304] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2a1f67557ec14b329091094b5225cb62; no voters:
I20251025 14:08:19.077106 31575 leader_election.cc:290] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:19.077163 31578 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:19.077303 31578 raft_consensus.cc:697] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 1 LEADER]: Becoming Leader. State: Replica: 2a1f67557ec14b329091094b5225cb62, State: Running, Role: LEADER
I20251025 14:08:19.077342 31575 sys_catalog.cc:565] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:19.077478 31578 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER }
I20251025 14:08:19.077826 31579 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2a1f67557ec14b329091094b5225cb62" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER } }
I20251025 14:08:19.077852 31580 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2a1f67557ec14b329091094b5225cb62. Latest consensus state: current_term: 1 leader_uuid: "2a1f67557ec14b329091094b5225cb62" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2a1f67557ec14b329091094b5225cb62" member_type: VOTER } }
I20251025 14:08:19.077963 31579 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:19.077967 31580 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:19.078348 31587 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:19.078754 31587 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:19.078866 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:19.080134 31587 catalog_manager.cc:1357] Generated new cluster ID: f9290fb1fd4f478198306442f3a7fabf
I20251025 14:08:19.080184 31587 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:19.089082 31587 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:19.089545 31587 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:19.093281 31587 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: Generated new TSK 0
I20251025 14:08:19.093428 31587 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:19.094703 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:19.095927 31603 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:19.096153 31604 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:19.096287 31606 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:19.097141 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:19.097266 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:19.097309 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:19.097327 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401299097327 us; error 0 us; skew 500 ppm
I20251025 14:08:19.097887 31499 webserver.cc:492] Webserver started at http://127.30.194.193:41327/ using document root <none> and password file <none>
I20251025 14:08:19.097977 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:19.098033 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:19.098089 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:19.098342 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "c74e3662f4cd4607b3780a3f27cbd077"
format_stamp: "Formatted at 2025-10-25 14:08:19 on dist-test-slave-v4l2"
I20251025 14:08:19.099387 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.001s
I20251025 14:08:19.099941 31611 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.100062 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.100100 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "c74e3662f4cd4607b3780a3f27cbd077"
format_stamp: "Formatted at 2025-10-25 14:08:19 on dist-test-slave-v4l2"
I20251025 14:08:19.100147 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicCommits.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:19.099475 31529 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59664:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
W20251025 14:08:19.101362 31575 master.cc:464] Invalid argument: unable to initialize TxnManager: Error creating table kudu_system.kudu_transactions on the master: not enough live tablet servers to create a table with the requested replication factor 1; 0 tablet servers are alive: unable to init TxnManager, will retry in 1.000s
I20251025 14:08:19.112162 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:19.112452 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:19.113013 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:19.113096 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.113149 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:19.113188 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:19.117516 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:42941
I20251025 14:08:19.117571 31681 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:42941 every 8 connection(s)
I20251025 14:08:19.118654 31682 heartbeater.cc:344] Connected to a master server at 127.30.194.254:32837
I20251025 14:08:19.118743 31682 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:19.118907 31682 heartbeater.cc:507] Master 127.30.194.254:32837 requested a full tablet report, sending...
I20251025 14:08:19.119341 31529 ts_manager.cc:194] Registered new tserver with Master: c74e3662f4cd4607b3780a3f27cbd077 (127.30.194.193:42941)
I20251025 14:08:19.119995 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.002240548s
I20251025 14:08:19.120325 31529 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:59674
I20251025 14:08:20.106194 31529 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59694:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:20.110982 31635 tablet_service.cc:1505] Processing CreateTablet for tablet 286bfa4043b34727aefe46d6ff085c74 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=4a49bb294b7b4dbd8a2b4d086122c7e9]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:20.111148 31635 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 286bfa4043b34727aefe46d6ff085c74. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:20.112422 31697 tablet_bootstrap.cc:492] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: Bootstrap starting.
I20251025 14:08:20.112805 31697 tablet_bootstrap.cc:654] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:20.113267 31697 tablet_bootstrap.cc:492] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: No bootstrap required, opened a new log
I20251025 14:08:20.113306 31697 ts_tablet_manager.cc:1403] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:20.113411 31697 raft_consensus.cc:359] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.113463 31697 raft_consensus.cc:385] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:20.113476 31697 raft_consensus.cc:740] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Initialized, Role: FOLLOWER
I20251025 14:08:20.113513 31697 consensus_queue.cc:260] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.113560 31697 raft_consensus.cc:399] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:20.113577 31697 raft_consensus.cc:493] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:20.113595 31697 raft_consensus.cc:3060] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:20.114120 31697 raft_consensus.cc:515] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.114188 31697 leader_election.cc:304] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c74e3662f4cd4607b3780a3f27cbd077; no voters:
I20251025 14:08:20.114293 31697 leader_election.cc:290] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:20.114327 31699 raft_consensus.cc:2804] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:20.114419 31697 ts_tablet_manager.cc:1434] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:20.114488 31682 heartbeater.cc:499] Master 127.30.194.254:32837 was elected leader, sending a full tablet report...
I20251025 14:08:20.114465 31699 raft_consensus.cc:697] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Becoming Leader. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Running, Role: LEADER
I20251025 14:08:20.114598 31699 consensus_queue.cc:237] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.114805 31700 tablet_replica.cc:442] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "c74e3662f4cd4607b3780a3f27cbd077" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } } }
I20251025 14:08:20.114845 31701 tablet_replica.cc:442] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: TxnStatusTablet state changed. Reason: New leader c74e3662f4cd4607b3780a3f27cbd077. Latest consensus state: current_term: 1 leader_uuid: "c74e3662f4cd4607b3780a3f27cbd077" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } } }
I20251025 14:08:20.114917 31700 tablet_replica.cc:445] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:20.114939 31701 tablet_replica.cc:445] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:20.115088 31704 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:20.115152 31704 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:20.115237 31529 catalog_manager.cc:5649] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 reported cstate change: term changed from 0 to 1, leader changed from <none> to c74e3662f4cd4607b3780a3f27cbd077 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c74e3662f4cd4607b3780a3f27cbd077" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:20.155200 31499 test_util.cc:276] Using random seed: 852327227
I20251025 14:08:20.160082 31529 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59720:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
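
The request above is the master-side view of the test table's creation. As a point of reference, a minimal sketch of creating an equivalent table through the public Kudu C++ client API follows; the function name, master address handling, and error handling are illustrative assumptions and are not taken from this log or from the test's own code.

```cpp
#include <memory>
#include <string>
#include <vector>

#include "kudu/client/client.h"

using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;
using kudu::client::sp::shared_ptr;
using kudu::Status;

// Sketch: create a table shaped like "test-workload" above -- INT32 "key" as the
// primary key, two hash buckets on "key", and a single replica.
Status CreateTestWorkloadTable(const std::string& master_addr) {
  shared_ptr<KuduClient> client;
  Status s = KuduClientBuilder()
      .add_master_server_addr(master_addr)  // e.g. the mini-cluster master address
      .Build(&client);
  if (!s.ok()) return s;

  KuduSchema schema;
  KuduSchemaBuilder b;
  b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
  b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
  s = b.Build(&schema);
  if (!s.ok()) return s;

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  return creator->table_name("test-workload")
      .schema(&schema)
      .add_hash_partitions({"key"}, 2)
      .num_replicas(1)
      .Create();
}
```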
I20251025 14:08:20.161602 31635 tablet_service.cc:1505] Processing CreateTablet for tablet 9e3e39148fb9460dbe93e34e82eb3580 (DEFAULT_TABLE table=test-workload [id=08509ae07b7547269032de0d56b89cb7]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:20.161638 31636 tablet_service.cc:1505] Processing CreateTablet for tablet f5c8093f32a544b9984f2ed6dff93332 (DEFAULT_TABLE table=test-workload [id=08509ae07b7547269032de0d56b89cb7]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:20.161736 31635 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9e3e39148fb9460dbe93e34e82eb3580. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:20.161830 31636 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f5c8093f32a544b9984f2ed6dff93332. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:20.162907 31697 tablet_bootstrap.cc:492] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: Bootstrap starting.
I20251025 14:08:20.163201 31697 tablet_bootstrap.cc:654] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:20.163704 31697 tablet_bootstrap.cc:492] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: No bootstrap required, opened a new log
I20251025 14:08:20.163754 31697 ts_tablet_manager.cc:1403] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:20.163863 31697 raft_consensus.cc:359] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.163918 31697 raft_consensus.cc:385] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:20.163938 31697 raft_consensus.cc:740] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Initialized, Role: FOLLOWER
I20251025 14:08:20.163990 31697 consensus_queue.cc:260] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.164031 31697 raft_consensus.cc:399] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:20.164049 31697 raft_consensus.cc:493] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:20.164076 31697 raft_consensus.cc:3060] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:20.164649 31697 raft_consensus.cc:515] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.164714 31697 leader_election.cc:304] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c74e3662f4cd4607b3780a3f27cbd077; no voters:
I20251025 14:08:20.164755 31697 leader_election.cc:290] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:20.164810 31699 raft_consensus.cc:2804] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:20.164829 31697 ts_tablet_manager.cc:1434] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:20.164858 31699 raft_consensus.cc:697] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Becoming Leader. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Running, Role: LEADER
I20251025 14:08:20.164880 31697 tablet_bootstrap.cc:492] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: Bootstrap starting.
I20251025 14:08:20.164891 31699 consensus_queue.cc:237] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.165194 31697 tablet_bootstrap.cc:654] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:20.165324 31529 catalog_manager.cc:5649] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 reported cstate change: term changed from 0 to 1, leader changed from <none> to c74e3662f4cd4607b3780a3f27cbd077 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c74e3662f4cd4607b3780a3f27cbd077" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:20.165763 31697 tablet_bootstrap.cc:492] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: No bootstrap required, opened a new log
I20251025 14:08:20.165807 31697 ts_tablet_manager.cc:1403] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:20.165910 31697 raft_consensus.cc:359] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.165951 31697 raft_consensus.cc:385] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:20.165969 31697 raft_consensus.cc:740] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Initialized, Role: FOLLOWER
I20251025 14:08:20.166005 31697 consensus_queue.cc:260] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.166038 31697 raft_consensus.cc:399] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:20.166056 31697 raft_consensus.cc:493] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:20.166075 31697 raft_consensus.cc:3060] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:20.166601 31697 raft_consensus.cc:515] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.166653 31697 leader_election.cc:304] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c74e3662f4cd4607b3780a3f27cbd077; no voters:
I20251025 14:08:20.166698 31697 leader_election.cc:290] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:20.166742 31699 raft_consensus.cc:2804] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:20.166769 31697 ts_tablet_manager.cc:1434] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:20.166821 31699 raft_consensus.cc:697] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Becoming Leader. State: Replica: c74e3662f4cd4607b3780a3f27cbd077, State: Running, Role: LEADER
I20251025 14:08:20.166864 31699 consensus_queue.cc:237] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } }
I20251025 14:08:20.167279 31529 catalog_manager.cc:5649] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 reported cstate change: term changed from 0 to 1, leader changed from <none> to c74e3662f4cd4607b3780a3f27cbd077 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c74e3662f4cd4607b3780a3f27cbd077" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c74e3662f4cd4607b3780a3f27cbd077" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42941 } health_report { overall_health: HEALTHY } } }
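
Between the leader elections above and the server shutdown below, the test performs its transactional writes against the two test-workload tablets. As a rough illustration of that kind of work, here is a minimal sketch of a multi-row insert committed through the Kudu C++ client's transaction API; the helper name, row values, and error handling are illustrative assumptions, not the test's actual code.

```cpp
#include <string>

#include "kudu/client/client.h"

using kudu::client::KuduClient;
using kudu::client::KuduInsert;
using kudu::client::KuduSession;
using kudu::client::KuduTable;
using kudu::client::KuduTransaction;
using kudu::client::sp::shared_ptr;
using kudu::Status;

// Sketch: insert a handful of rows into "test-workload" inside one transaction
// and commit it, mirroring the kind of work a basic-commit test would do.
Status WriteRowsInTransaction(const shared_ptr<KuduClient>& client, int num_rows) {
  shared_ptr<KuduTable> table;
  Status s = client->OpenTable("test-workload", &table);
  if (!s.ok()) return s;

  shared_ptr<KuduTransaction> txn;
  s = client->NewTransaction(&txn);
  if (!s.ok()) return s;

  shared_ptr<KuduSession> session;
  s = txn->CreateSession(&session);
  if (!s.ok()) return s;

  for (int i = 0; i < num_rows; ++i) {
    KuduInsert* insert = table->NewInsert();  // ownership passes to Apply()
    insert->mutable_row()->SetInt32("key", i);
    insert->mutable_row()->SetInt32("int_val", i * 10);
    insert->mutable_row()->SetStringCopy("string_val", "row-" + std::to_string(i));
    s = session->Apply(insert);
    if (!s.ok()) return s;
  }
  s = session->Flush();
  if (!s.ok()) return s;

  // Commit() initiates the commit and waits for it to finalize.
  return txn->Commit();
}
```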
I20251025 14:08:20.252344 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:20.256568 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:20.256969 31499 tablet_replica.cc:333] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077: stopping tablet replica
I20251025 14:08:20.257099 31499 raft_consensus.cc:2243] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:20.257171 31499 raft_consensus.cc:2272] T 9e3e39148fb9460dbe93e34e82eb3580 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:20.257558 31499 tablet_replica.cc:333] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077: stopping tablet replica
I20251025 14:08:20.257608 31499 raft_consensus.cc:2243] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:20.257647 31499 raft_consensus.cc:2272] T 286bfa4043b34727aefe46d6ff085c74 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:20.257906 31499 tablet_replica.cc:333] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077: stopping tablet replica
I20251025 14:08:20.257952 31499 raft_consensus.cc:2243] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:20.257997 31499 raft_consensus.cc:2272] T f5c8093f32a544b9984f2ed6dff93332 P c74e3662f4cd4607b3780a3f27cbd077 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:20.270577 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:20.272022 31499 master.cc:561] Master@127.30.194.254:32837 shutting down...
I20251025 14:08:20.274953 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:20.275029 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:20.275071 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 2a1f67557ec14b329091094b5225cb62: stopping tablet replica
I20251025 14:08:20.286978 31499 master.cc:583] Master@127.30.194.254:32837 shutdown complete.
[       OK ] TxnCommitITest.TestBasicCommits (1251 ms)
[ RUN      ] TxnCommitITest.TestBasicAborts
I20251025 14:08:20.291600 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:43385
I20251025 14:08:20.291775 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:20.292903 31742 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:20.292925 31741 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:20.293151 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:20.292933 31744 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:20.293352 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:20.293390 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:20.293403 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401300293403 us; error 0 us; skew 500 ppm
I20251025 14:08:20.293973 31499 webserver.cc:492] Webserver started at http://127.30.194.254:43263/ using document root <none> and password file <none>
I20251025 14:08:20.294054 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:20.294095 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:20.294138 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:20.294401 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "d8a78ba1a093479486f35a5364f47c9e"
format_stamp: "Formatted at 2025-10-25 14:08:20 on dist-test-slave-v4l2"
I20251025 14:08:20.295245 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:20.295693 31749 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:20.295814 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:20.295879 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "d8a78ba1a093479486f35a5364f47c9e"
format_stamp: "Formatted at 2025-10-25 14:08:20 on dist-test-slave-v4l2"
I20251025 14:08:20.295928 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:20.310979 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:20.311196 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:20.314687 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:43385
I20251025 14:08:20.317737 31811 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:43385 every 8 connection(s)
I20251025 14:08:20.317888 31812 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:20.318843 31812 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: Bootstrap starting.
I20251025 14:08:20.319113 31812 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:20.319586 31812 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: No bootstrap required, opened a new log
I20251025 14:08:20.319698 31812 raft_consensus.cc:359] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER }
I20251025 14:08:20.319749 31812 raft_consensus.cc:385] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:20.319769 31812 raft_consensus.cc:740] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d8a78ba1a093479486f35a5364f47c9e, State: Initialized, Role: FOLLOWER
I20251025 14:08:20.319823 31812 consensus_queue.cc:260] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER }
I20251025 14:08:20.319866 31812 raft_consensus.cc:399] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:20.319892 31812 raft_consensus.cc:493] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:20.319923 31812 raft_consensus.cc:3060] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:20.320365 31812 raft_consensus.cc:515] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER }
I20251025 14:08:20.320420 31812 leader_election.cc:304] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d8a78ba1a093479486f35a5364f47c9e; no voters:
I20251025 14:08:20.320533 31812 leader_election.cc:290] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:20.320578 31815 raft_consensus.cc:2804] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:20.320688 31812 sys_catalog.cc:565] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:20.320688 31815 raft_consensus.cc:697] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 1 LEADER]: Becoming Leader. State: Replica: d8a78ba1a093479486f35a5364f47c9e, State: Running, Role: LEADER
I20251025 14:08:20.320766 31815 consensus_queue.cc:237] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER }
I20251025 14:08:20.320930 31815 sys_catalog.cc:455] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [sys.catalog]: SysCatalogTable state changed. Reason: New leader d8a78ba1a093479486f35a5364f47c9e. Latest consensus state: current_term: 1 leader_uuid: "d8a78ba1a093479486f35a5364f47c9e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER } }
I20251025 14:08:20.320976 31815 sys_catalog.cc:458] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:20.321055 31816 sys_catalog.cc:455] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "d8a78ba1a093479486f35a5364f47c9e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d8a78ba1a093479486f35a5364f47c9e" member_type: VOTER } }
I20251025 14:08:20.321237 31816 sys_catalog.cc:458] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:20.321959 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
W20251025 14:08:20.322104 31831 catalog_manager.cc:1568] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20251025 14:08:20.322186 31831 catalog_manager.cc:883] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20251025 14:08:20.322540 31819 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:20.322871 31819 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:20.323478 31819 catalog_manager.cc:1357] Generated new cluster ID: 8dd9bf771fcd4e9abe32a6b3b090eb22
I20251025 14:08:20.323527 31819 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:20.330291 31819 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:20.330698 31819 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:20.337986 31819 catalog_manager.cc:6022] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: Generated new TSK 0
I20251025 14:08:20.338090 31819 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:20.342437 31766 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59870:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:20.353729 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:20.354935 31840 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:20.354990 31842 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:20.355015 31839 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:20.355096 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:20.355346 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:20.355391 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:20.355412 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401300355411 us; error 0 us; skew 500 ppm
I20251025 14:08:20.355980 31499 webserver.cc:492] Webserver started at http://127.30.194.193:32841/ using document root <none> and password file <none>
I20251025 14:08:20.356066 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:20.356113 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:20.356161 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:20.356469 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "c4dac8f910494981b20adc329318ec3a"
format_stamp: "Formatted at 2025-10-25 14:08:20 on dist-test-slave-v4l2"
I20251025 14:08:20.357671 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.002s sys 0.000s
I20251025 14:08:20.358161 31847 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:20.358299 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:08:20.358351 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "c4dac8f910494981b20adc329318ec3a"
format_stamp: "Formatted at 2025-10-25 14:08:20 on dist-test-slave-v4l2"
I20251025 14:08:20.358399 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBasicAborts.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:20.390616 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:20.390832 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:20.391348 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:20.391394 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:20.391426 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:20.391443 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:20.395027 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:44069
I20251025 14:08:20.395064 31912 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:44069 every 8 connection(s)
I20251025 14:08:20.395686 31913 heartbeater.cc:344] Connected to a master server at 127.30.194.254:43385
I20251025 14:08:20.395754 31913 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:20.395872 31913 heartbeater.cc:507] Master 127.30.194.254:43385 requested a full tablet report, sending...
I20251025 14:08:20.396131 31766 ts_manager.cc:194] Registered new tserver with Master: c4dac8f910494981b20adc329318ec3a (127.30.194.193:44069)
I20251025 14:08:20.396338 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001121295s
I20251025 14:08:20.397024 31766 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:59882
I20251025 14:08:21.347645 31766 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59908:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:21.351605 31876 tablet_service.cc:1505] Processing CreateTablet for tablet d24d214cfd8941cdb6ded2d1328981cc (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=53a158ae4a434c618f943c8c937b0e2b]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:21.351730 31876 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d24d214cfd8941cdb6ded2d1328981cc. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:21.352806 31933 tablet_bootstrap.cc:492] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: Bootstrap starting.
I20251025 14:08:21.353207 31933 tablet_bootstrap.cc:654] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:21.353654 31933 tablet_bootstrap.cc:492] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: No bootstrap required, opened a new log
I20251025 14:08:21.353691 31933 ts_tablet_manager.cc:1403] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.353792 31933 raft_consensus.cc:359] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.353840 31933 raft_consensus.cc:385] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:21.353854 31933 raft_consensus.cc:740] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Initialized, Role: FOLLOWER
I20251025 14:08:21.353904 31933 consensus_queue.cc:260] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.353942 31933 raft_consensus.cc:399] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:21.353966 31933 raft_consensus.cc:493] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:21.353992 31933 raft_consensus.cc:3060] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:21.354488 31933 raft_consensus.cc:515] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.354545 31933 leader_election.cc:304] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c4dac8f910494981b20adc329318ec3a; no voters:
I20251025 14:08:21.354671 31933 leader_election.cc:290] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:21.354717 31935 raft_consensus.cc:2804] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:21.354802 31933 ts_tablet_manager.cc:1434] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.354848 31935 raft_consensus.cc:697] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Becoming Leader. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Running, Role: LEADER
I20251025 14:08:21.354883 31913 heartbeater.cc:499] Master 127.30.194.254:43385 was elected leader, sending a full tablet report...
I20251025 14:08:21.354902 31935 consensus_queue.cc:237] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.355064 31935 tablet_replica.cc:442] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: TxnStatusTablet state changed. Reason: New leader c4dac8f910494981b20adc329318ec3a. Latest consensus state: current_term: 1 leader_uuid: "c4dac8f910494981b20adc329318ec3a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } } }
I20251025 14:08:21.355120 31935 tablet_replica.cc:445] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:21.355093 31936 tablet_replica.cc:442] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "c4dac8f910494981b20adc329318ec3a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } } }
I20251025 14:08:21.355152 31936 tablet_replica.cc:445] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:21.355311 31766 catalog_manager.cc:5649] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a reported cstate change: term changed from 0 to 1, leader changed from <none> to c4dac8f910494981b20adc329318ec3a (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c4dac8f910494981b20adc329318ec3a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:21.355427 31939 txn_status_manager.cc:874] Waiting until the node catches up with all replicated operations from the previous term...
I20251025 14:08:21.355486 31939 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:21.430135 31499 test_util.cc:276] Using random seed: 853602164
I20251025 14:08:21.434193 31766 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59930:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
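(The second CreateTable request is the test's own "test-workload" table: a simple three-column schema, hash-partitioned into two buckets on "key", with a single replica to match the one-tablet-server mini cluster; it is presumably set up by the test's workload helper. A minimal client-side sketch of the same definition follows, reusing the includes and using-declarations from the earlier sketch; the helper name is hypothetical.)

// Sketch only: an equivalent client-side definition of the "test-workload"
// table requested above (HASH (key) into 2 buckets, single replica).
Status CreateTestWorkloadLikeTable(KuduClient* client) {  // hypothetical helper
  KuduSchemaBuilder b;
  b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
  b.SetPrimaryKey({"key"});
  KuduSchema schema;
  Status s = b.Build(&schema);
  if (!s.ok()) return s;

  // Two hash buckets on "key" and no explicit range bounds, which yields the
  // two tablets with "HASH (key) PARTITION 0/1, RANGE (key) PARTITION
  // UNBOUNDED" seen in the CreateTablet lines below.
  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  return creator->table_name("test-workload")
      .schema(&schema)
      .add_hash_partitions({"key"}, 2)
      .num_replicas(1)
      .Create();
}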
I20251025 14:08:21.435566 31876 tablet_service.cc:1505] Processing CreateTablet for tablet abd821122fd74565ab11eac42293fa42 (DEFAULT_TABLE table=test-workload [id=7ec27c90115f45a6ab0104816a1603a5]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:21.435591 31877 tablet_service.cc:1505] Processing CreateTablet for tablet 804b1b63c2c2402db3f9435793350573 (DEFAULT_TABLE table=test-workload [id=7ec27c90115f45a6ab0104816a1603a5]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:21.435706 31877 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 804b1b63c2c2402db3f9435793350573. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:21.435767 31876 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet abd821122fd74565ab11eac42293fa42. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:21.437068 31933 tablet_bootstrap.cc:492] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: Bootstrap starting.
I20251025 14:08:21.437430 31933 tablet_bootstrap.cc:654] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:21.437995 31933 tablet_bootstrap.cc:492] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: No bootstrap required, opened a new log
I20251025 14:08:21.438040 31933 ts_tablet_manager.cc:1403] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.438175 31933 raft_consensus.cc:359] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.438223 31933 raft_consensus.cc:385] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:21.438238 31933 raft_consensus.cc:740] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Initialized, Role: FOLLOWER
I20251025 14:08:21.438288 31933 consensus_queue.cc:260] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.438329 31933 raft_consensus.cc:399] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:21.438344 31933 raft_consensus.cc:493] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:21.438374 31933 raft_consensus.cc:3060] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:21.438994 31933 raft_consensus.cc:515] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.439052 31933 leader_election.cc:304] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c4dac8f910494981b20adc329318ec3a; no voters:
I20251025 14:08:21.439095 31933 leader_election.cc:290] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:21.439141 31936 raft_consensus.cc:2804] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:21.439189 31933 ts_tablet_manager.cc:1434] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:21.439203 31936 raft_consensus.cc:697] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Becoming Leader. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Running, Role: LEADER
I20251025 14:08:21.439246 31933 tablet_bootstrap.cc:492] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: Bootstrap starting.
I20251025 14:08:21.439249 31936 consensus_queue.cc:237] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.439519 31933 tablet_bootstrap.cc:654] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:21.439710 31766 catalog_manager.cc:5649] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a reported cstate change: term changed from 0 to 1, leader changed from <none> to c4dac8f910494981b20adc329318ec3a (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c4dac8f910494981b20adc329318ec3a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:21.440032 31933 tablet_bootstrap.cc:492] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: No bootstrap required, opened a new log
I20251025 14:08:21.440084 31933 ts_tablet_manager.cc:1403] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.440243 31933 raft_consensus.cc:359] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.440299 31933 raft_consensus.cc:385] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:21.440322 31933 raft_consensus.cc:740] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Initialized, Role: FOLLOWER
I20251025 14:08:21.440385 31933 consensus_queue.cc:260] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.440439 31933 raft_consensus.cc:399] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:21.440461 31933 raft_consensus.cc:493] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:21.440490 31933 raft_consensus.cc:3060] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:21.441038 31933 raft_consensus.cc:515] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.441110 31933 leader_election.cc:304] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c4dac8f910494981b20adc329318ec3a; no voters:
I20251025 14:08:21.441159 31933 leader_election.cc:290] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:21.441180 31936 raft_consensus.cc:2804] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:21.441231 31936 raft_consensus.cc:697] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Becoming Leader. State: Replica: c4dac8f910494981b20adc329318ec3a, State: Running, Role: LEADER
I20251025 14:08:21.441242 31933 ts_tablet_manager.cc:1434] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.441263 31936 consensus_queue.cc:237] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } }
I20251025 14:08:21.441627 31766 catalog_manager.cc:5649] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a reported cstate change: term changed from 0 to 1, leader changed from <none> to c4dac8f910494981b20adc329318ec3a (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c4dac8f910494981b20adc329318ec3a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c4dac8f910494981b20adc329318ec3a" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 44069 } health_report { overall_health: HEALTHY } } }
W20251025 14:08:21.514881 31965 tablet_replica.cc:1306] Aborted: operation has been aborted: cancelling pending write operations
I20251025 14:08:21.517758 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:21.520344 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:21.520578 31499 tablet_replica.cc:333] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a: stopping tablet replica
I20251025 14:08:21.520661 31499 raft_consensus.cc:2243] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:21.520700 31499 raft_consensus.cc:2272] T 804b1b63c2c2402db3f9435793350573 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:21.521003 31499 tablet_replica.cc:333] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a: stopping tablet replica
I20251025 14:08:21.521047 31499 raft_consensus.cc:2243] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:21.521077 31499 raft_consensus.cc:2272] T d24d214cfd8941cdb6ded2d1328981cc P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:21.521306 31499 tablet_replica.cc:333] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a: stopping tablet replica
I20251025 14:08:21.521342 31499 raft_consensus.cc:2243] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:21.521368 31499 raft_consensus.cc:2272] T abd821122fd74565ab11eac42293fa42 P c4dac8f910494981b20adc329318ec3a [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:21.533500 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:21.534649 31499 master.cc:561] Master@127.30.194.254:43385 shutting down...
I20251025 14:08:21.538106 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:21.538170 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:21.538201 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P d8a78ba1a093479486f35a5364f47c9e: stopping tablet replica
I20251025 14:08:21.549882 31499 master.cc:583] Master@127.30.194.254:43385 shutdown complete.
[ OK ] TxnCommitITest.TestBasicAborts (1261 ms)
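(TestBasicAborts exercises the rollback path: writes are presumably applied inside a transaction that is then aborted, which is consistent with the "cancelling pending write operations" warning shortly before the tablet server shutdown above. A rough sketch of that flow from the client's point of view, assuming the multi-row transaction API available in recent C++ clients; the helper and the column values are illustrative, not the test's actual code.)

// Sketch, assuming the C++ client's KuduTransaction API (Kudu 1.15+).
#include <kudu/client/client.h>

#include <memory>

using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduInsert;
using kudu::client::KuduSession;
using kudu::client::KuduTable;
using kudu::client::KuduTransaction;

Status InsertThenAbort(const kudu::client::sp::shared_ptr<KuduClient>& client) {
  // Begin a transaction and open a session bound to it.
  kudu::client::sp::shared_ptr<KuduTransaction> txn;
  Status s = client->NewTransaction(&txn);
  if (!s.ok()) return s;
  kudu::client::sp::shared_ptr<KuduSession> session;
  s = txn->CreateSession(&session);
  if (!s.ok()) return s;

  // Write one row through the transactional session; the non-nullable
  // columns of the test-workload schema must both be set.
  kudu::client::sp::shared_ptr<KuduTable> table;
  s = client->OpenTable("test-workload", &table);
  if (!s.ok()) return s;
  std::unique_ptr<KuduInsert> insert(table->NewInsert());
  s = insert->mutable_row()->SetInt32("key", 0);
  if (!s.ok()) return s;
  s = insert->mutable_row()->SetInt32("int_val", 0);
  if (!s.ok()) return s;
  s = session->Apply(insert.release());  // the session takes ownership
  if (!s.ok()) return s;
  s = session->Flush();
  if (!s.ok()) return s;

  // Abort instead of committing: participants cancel any pending writes and
  // none of the transaction's rows become visible.
  return txn->Rollback();
}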
[ RUN ] TxnCommitITest.TestAbortInProgress
I20251025 14:08:21.553521 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:46577
I20251025 14:08:21.553653 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:21.554611 31976 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:21.554687 31977 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:21.554632 31979 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:21.554961 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:21.555037 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:21.555066 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:21.555092 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401301555093 us; error 0 us; skew 500 ppm
I20251025 14:08:21.555651 31499 webserver.cc:492] Webserver started at http://127.30.194.254:43121/ using document root <none> and password file <none>
I20251025 14:08:21.555721 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:21.555763 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:21.555814 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:21.556053 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "498e1a9cb85c4ac598016b957bd21e6e"
format_stamp: "Formatted at 2025-10-25 14:08:21 on dist-test-slave-v4l2"
I20251025 14:08:21.556869 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:21.557348 31984 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.557447 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.557493 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "498e1a9cb85c4ac598016b957bd21e6e"
format_stamp: "Formatted at 2025-10-25 14:08:21 on dist-test-slave-v4l2"
I20251025 14:08:21.557540 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:21.569093 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:21.569264 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:21.571885 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:46577
I20251025 14:08:21.574054 32046 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:46577 every 8 connection(s)
I20251025 14:08:21.574184 32047 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:21.575126 32047 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e: Bootstrap starting.
I20251025 14:08:21.575403 32047 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:21.575829 32047 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e: No bootstrap required, opened a new log
I20251025 14:08:21.575939 32047 raft_consensus.cc:359] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER }
I20251025 14:08:21.575986 32047 raft_consensus.cc:385] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:21.576009 32047 raft_consensus.cc:740] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 498e1a9cb85c4ac598016b957bd21e6e, State: Initialized, Role: FOLLOWER
I20251025 14:08:21.576059 32047 consensus_queue.cc:260] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER }
I20251025 14:08:21.576105 32047 raft_consensus.cc:399] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:21.576129 32047 raft_consensus.cc:493] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:21.576161 32047 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:21.576612 32047 raft_consensus.cc:515] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER }
I20251025 14:08:21.576670 32047 leader_election.cc:304] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 498e1a9cb85c4ac598016b957bd21e6e; no voters:
I20251025 14:08:21.576769 32047 leader_election.cc:290] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:21.576802 32050 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:21.576889 32047 sys_catalog.cc:565] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:21.576912 32050 raft_consensus.cc:697] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 1 LEADER]: Becoming Leader. State: Replica: 498e1a9cb85c4ac598016b957bd21e6e, State: Running, Role: LEADER
I20251025 14:08:21.576951 32050 consensus_queue.cc:237] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER }
I20251025 14:08:21.577132 32051 sys_catalog.cc:455] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [sys.catalog]: SysCatalogTable state changed. Reason: New leader 498e1a9cb85c4ac598016b957bd21e6e. Latest consensus state: current_term: 1 leader_uuid: "498e1a9cb85c4ac598016b957bd21e6e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER } }
I20251025 14:08:21.577149 32052 sys_catalog.cc:455] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "498e1a9cb85c4ac598016b957bd21e6e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "498e1a9cb85c4ac598016b957bd21e6e" member_type: VOTER } }
I20251025 14:08:21.577206 32051 sys_catalog.cc:458] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:21.577211 32052 sys_catalog.cc:458] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:21.577384 32054 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:21.577543 32054 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:21.578104 32054 catalog_manager.cc:1357] Generated new cluster ID: ae402dc4fe5149d09a9b0cf9a60f3079
I20251025 14:08:21.578157 32054 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:21.578238 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:21.593237 32054 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:21.593607 32054 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:21.601120 32054 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e: Generated new TSK 0
I20251025 14:08:21.601197 32054 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:21.603555 32001 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41506:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:21.609944 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:21.611159 32074 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:21.611335 32077 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:21.611347 32075 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:21.611349 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:21.611516 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:21.611549 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:21.611575 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401301611575 us; error 0 us; skew 500 ppm
I20251025 14:08:21.612104 31499 webserver.cc:492] Webserver started at http://127.30.194.193:33971/ using document root <none> and password file <none>
I20251025 14:08:21.612182 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:21.612226 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:21.612273 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:21.612535 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "9037b63836be4aa8881919d2f92bddbe"
format_stamp: "Formatted at 2025-10-25 14:08:21 on dist-test-slave-v4l2"
I20251025 14:08:21.613428 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:21.613888 32082 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.614007 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.614050 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "9037b63836be4aa8881919d2f92bddbe"
format_stamp: "Formatted at 2025-10-25 14:08:21 on dist-test-slave-v4l2"
I20251025 14:08:21.614094 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:21.618659 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:21.618831 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:21.619119 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:21.619153 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.619179 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:21.619215 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:21.622670 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:40149
I20251025 14:08:21.622704 32147 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:40149 every 8 connection(s)
I20251025 14:08:21.623070 32148 heartbeater.cc:344] Connected to a master server at 127.30.194.254:46577
I20251025 14:08:21.623315 32148 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:21.623440 32148 heartbeater.cc:507] Master 127.30.194.254:46577 requested a full tablet report, sending...
I20251025 14:08:21.623646 32001 ts_manager.cc:194] Registered new tserver with Master: 9037b63836be4aa8881919d2f92bddbe (127.30.194.193:40149)
I20251025 14:08:21.623977 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001123179s
I20251025 14:08:21.624289 32001 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:41508
I20251025 14:08:22.611943 32001 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41528:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:22.615832 32112 tablet_service.cc:1505] Processing CreateTablet for tablet eed22feb81fa4447950719e95c6bb7dd (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=2d810cc89598429393653197f24e2968]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:22.615958 32112 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet eed22feb81fa4447950719e95c6bb7dd. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:22.617116 32168 tablet_bootstrap.cc:492] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.617441 32168 tablet_bootstrap.cc:654] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:22.617954 32168 tablet_bootstrap.cc:492] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: No bootstrap required, opened a new log
I20251025 14:08:22.618005 32168 ts_tablet_manager.cc:1403] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:22.618135 32168 raft_consensus.cc:359] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.618192 32168 raft_consensus.cc:385] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:22.618216 32168 raft_consensus.cc:740] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.618269 32168 consensus_queue.cc:260] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.618310 32168 raft_consensus.cc:399] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.618335 32168 raft_consensus.cc:493] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.618363 32168 raft_consensus.cc:3060] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:22.618832 32168 raft_consensus.cc:515] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.618909 32168 leader_election.cc:304] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.619026 32168 leader_election.cc:290] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:22.619068 32170 raft_consensus.cc:2804] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:22.619171 32168 ts_tablet_manager.cc:1434] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:22.619211 32148 heartbeater.cc:499] Master 127.30.194.254:46577 was elected leader, sending a full tablet report...
I20251025 14:08:22.619207 32170 raft_consensus.cc:697] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.619303 32170 consensus_queue.cc:237] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.619516 32171 tablet_replica.cc:442] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } } }
I20251025 14:08:22.619556 32172 tablet_replica.cc:442] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: TxnStatusTablet state changed. Reason: New leader 9037b63836be4aa8881919d2f92bddbe. Latest consensus state: current_term: 1 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } } }
I20251025 14:08:22.619647 32171 tablet_replica.cc:445] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:22.619670 32172 tablet_replica.cc:445] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:22.619741 32001 catalog_manager.cc:5649] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 0 to 1, leader changed from <none> to 9037b63836be4aa8881919d2f92bddbe (127.30.194.193). New cstate: current_term: 1 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.619829 32174 txn_status_manager.cc:874] Waiting until the node catches up with all replicated operations from the previous term...
I20251025 14:08:22.619881 32174 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:22.657770 31499 test_util.cc:276] Using random seed: 854829800
I20251025 14:08:22.661770 32001 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:41548:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:08:22.663055 32112 tablet_service.cc:1505] Processing CreateTablet for tablet 0f293353cb3849e690958cc3b8ecd063 (DEFAULT_TABLE table=test-workload [id=931b68e27d55433a96e5fe2525e7f2b6]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:22.663086 32111 tablet_service.cc:1505] Processing CreateTablet for tablet fe0a1912011042f8a898415d79c61705 (DEFAULT_TABLE table=test-workload [id=931b68e27d55433a96e5fe2525e7f2b6]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:22.663175 32111 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fe0a1912011042f8a898415d79c61705. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:22.663267 32112 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0f293353cb3849e690958cc3b8ecd063. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:22.664458 32168 tablet_bootstrap.cc:492] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.664856 32168 tablet_bootstrap.cc:654] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:22.665423 32168 tablet_bootstrap.cc:492] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: No bootstrap required, opened a new log
I20251025 14:08:22.665467 32168 ts_tablet_manager.cc:1403] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:22.665582 32168 raft_consensus.cc:359] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.665623 32168 raft_consensus.cc:385] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:22.665637 32168 raft_consensus.cc:740] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.665676 32168 consensus_queue.cc:260] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.665728 32168 raft_consensus.cc:399] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.665755 32168 raft_consensus.cc:493] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.665776 32168 raft_consensus.cc:3060] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:22.666307 32168 raft_consensus.cc:515] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.666370 32168 leader_election.cc:304] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.666401 32168 leader_election.cc:290] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:22.666432 32172 raft_consensus.cc:2804] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:22.666479 32172 raft_consensus.cc:697] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.666491 32168 ts_tablet_manager.cc:1434] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:22.666539 32168 tablet_bootstrap.cc:492] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.666865 32168 tablet_bootstrap.cc:654] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:22.666523 32172 consensus_queue.cc:237] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.667418 32168 tablet_bootstrap.cc:492] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: No bootstrap required, opened a new log
I20251025 14:08:22.667466 32168 ts_tablet_manager.cc:1403] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:22.667475 32001 catalog_manager.cc:5649] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 0 to 1, leader changed from <none> to 9037b63836be4aa8881919d2f92bddbe (127.30.194.193). New cstate: current_term: 1 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.667622 32168 raft_consensus.cc:359] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.667670 32168 raft_consensus.cc:385] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:22.667696 32168 raft_consensus.cc:740] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.667745 32168 consensus_queue.cc:260] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.667783 32168 raft_consensus.cc:399] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.667800 32168 raft_consensus.cc:493] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.667826 32168 raft_consensus.cc:3060] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:22.668267 32168 raft_consensus.cc:515] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.668316 32168 leader_election.cc:304] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.668450 32168 leader_election.cc:290] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:22.668478 32172 raft_consensus.cc:2804] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:22.668531 32172 raft_consensus.cc:697] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.668560 32168 ts_tablet_manager.cc:1434] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:22.668571 32172 consensus_queue.cc:237] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.669057 32001 catalog_manager.cc:5649] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 0 to 1, leader changed from <none> to 9037b63836be4aa8881919d2f92bddbe (127.30.194.193). New cstate: current_term: 1 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.741480 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:22.743871 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:22.744064 31499 tablet_replica.cc:333] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: stopping tablet replica
I20251025 14:08:22.744145 31499 raft_consensus.cc:2243] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.744194 31499 raft_consensus.cc:2272] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.744496 31499 tablet_replica.cc:333] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: stopping tablet replica
I20251025 14:08:22.744540 31499 raft_consensus.cc:2243] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.744589 31499 raft_consensus.cc:2272] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.744796 31499 tablet_replica.cc:333] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: stopping tablet replica
I20251025 14:08:22.744840 31499 raft_consensus.cc:2243] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.744879 31499 raft_consensus.cc:2272] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.756947 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:22.758235 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:22.759249 32210 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:22.759267 32213 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:22.759460 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:22.759267 32211 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:22.759605 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:22.759636 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:22.759660 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401302759659 us; error 0 us; skew 500 ppm
I20251025 14:08:22.760222 31499 webserver.cc:492] Webserver started at http://127.30.194.193:33971/ using document root <none> and password file <none>
I20251025 14:08:22.760303 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:22.760344 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:08:22.760823 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.000s sys 0.001s
I20251025 14:08:22.761354 32218 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.761500 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.761541 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "9037b63836be4aa8881919d2f92bddbe"
format_stamp: "Formatted at 2025-10-25 14:08:21 on dist-test-slave-v4l2"
I20251025 14:08:22.761595 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortInProgress.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:22.771793 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:22.771962 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:22.772418 32226 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20251025 14:08:22.773947 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20251025 14:08:22.773991 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.002s user 0.000s sys 0.000s
I20251025 14:08:22.774016 31499 ts_tablet_manager.cc:600] Registering tablets (0/3 complete)
I20251025 14:08:22.774518 32226 tablet_bootstrap.cc:492] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.775141 31499 ts_tablet_manager.cc:616] Registered 3 tablets
I20251025 14:08:22.775189 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:22.778623 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:40149
I20251025 14:08:22.779592 32290 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:40149 every 8 connection(s)
I20251025 14:08:22.781733 32291 heartbeater.cc:344] Connected to a master server at 127.30.194.254:46577
I20251025 14:08:22.781785 32291 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:22.781889 32291 heartbeater.cc:507] Master 127.30.194.254:46577 requested a full tablet report, sending...
I20251025 14:08:22.782136 31998 ts_manager.cc:194] Re-registered known tserver with Master: 9037b63836be4aa8881919d2f92bddbe (127.30.194.193:40149)
I20251025 14:08:22.782663 31998 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:41550
I20251025 14:08:22.789690 32226 tablet_bootstrap.cc:492] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap replayed 1/1 log segments. Stats: ops{read=111 overwritten=0 applied=111 ignored=0} inserts{seen=2621 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:22.789914 32226 tablet_bootstrap.cc:492] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap complete.
I20251025 14:08:22.790014 32226 ts_tablet_manager.cc:1403] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.016s user 0.011s sys 0.003s
I20251025 14:08:22.790105 32226 raft_consensus.cc:359] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.790151 32226 raft_consensus.cc:740] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.790200 32226 consensus_queue.cc:260] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 111, Last appended: 1.111, Last appended by leader: 111, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.790247 32226 raft_consensus.cc:399] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.790266 32226 raft_consensus.cc:493] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.790284 32226 raft_consensus.cc:3060] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:22.790774 32226 raft_consensus.cc:515] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.790827 32226 leader_election.cc:304] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.790920 32226 leader_election.cc:290] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:22.790972 32294 raft_consensus.cc:2804] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:22.791024 32226 ts_tablet_manager.cc:1434] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.002s sys 0.000s
I20251025 14:08:22.791046 32294 raft_consensus.cc:697] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.791059 32291 heartbeater.cc:499] Master 127.30.194.254:46577 was elected leader, sending a full tablet report...
I20251025 14:08:22.791082 32226 tablet_bootstrap.cc:492] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.791087 32294 consensus_queue.cc:237] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 111, Committed index: 111, Last appended: 1.111, Last appended by leader: 111, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.791505 31998 catalog_manager.cc:5649] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.792388 32226 tablet_bootstrap.cc:492] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=3 ignored=0} mutations{seen=1 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:22.792613 32226 tablet_bootstrap.cc:492] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Bootstrap complete.
I20251025 14:08:22.792701 32226 ts_tablet_manager.cc:1403] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.002s user 0.001s sys 0.000s
I20251025 14:08:22.792798 32226 raft_consensus.cc:359] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.792842 32226 raft_consensus.cc:740] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.792891 32226 consensus_queue.cc:260] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 1.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.792927 32226 raft_consensus.cc:399] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.792958 32226 raft_consensus.cc:493] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.793006 32226 raft_consensus.cc:3060] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:22.793481 32226 raft_consensus.cc:515] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.793535 32226 leader_election.cc:304] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.793576 32226 leader_election.cc:290] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:22.793601 32294 raft_consensus.cc:2804] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:22.793653 32226 ts_tablet_manager.cc:1434] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:22.793705 32294 raft_consensus.cc:697] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.793705 32226 tablet_bootstrap.cc:492] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap starting.
I20251025 14:08:22.793767 32294 consensus_queue.cc:237] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 1.5, Last appended by leader: 5, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.794097 32295 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } } }
I20251025 14:08:22.794200 32299 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: New leader 9037b63836be4aa8881919d2f92bddbe. Latest consensus state: current_term: 2 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } } }
I20251025 14:08:22.794202 31998 catalog_manager.cc:5649] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.794220 32295 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:22.794294 32299 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:22.794565 32302 txn_status_manager.cc:874] Waiting until the node catches up with all operations replicated in the previous term...
I20251025 14:08:22.794617 32302 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:22.794819 32302 txn_status_manager.cc:728] Starting 1 aborts task
I20251025 14:08:22.805853 32226 tablet_bootstrap.cc:492] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap replayed 1/1 log segments. Stats: ops{read=111 overwritten=0 applied=111 ignored=0} inserts{seen=2589 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:22.806085 32226 tablet_bootstrap.cc:492] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Bootstrap complete.
I20251025 14:08:22.806202 32226 ts_tablet_manager.cc:1403] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Time spent bootstrapping tablet: real 0.013s user 0.008s sys 0.003s
I20251025 14:08:22.806321 32226 raft_consensus.cc:359] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.806370 32226 raft_consensus.cc:740] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.806420 32226 consensus_queue.cc:260] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 111, Last appended: 1.111, Last appended by leader: 111, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.806463 32226 raft_consensus.cc:399] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.806491 32226 raft_consensus.cc:493] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.806521 32226 raft_consensus.cc:3060] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:22.807024 32226 raft_consensus.cc:515] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.807080 32226 leader_election.cc:304] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9037b63836be4aa8881919d2f92bddbe; no voters:
I20251025 14:08:22.807121 32226 leader_election.cc:290] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:22.807183 32299 raft_consensus.cc:2804] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:22.807204 32226 ts_tablet_manager.cc:1434] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:22.807229 32299 raft_consensus.cc:697] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Becoming Leader. State: Replica: 9037b63836be4aa8881919d2f92bddbe, State: Running, Role: LEADER
I20251025 14:08:22.807268 32299 consensus_queue.cc:237] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 111, Committed index: 111, Last appended: 1.111, Last appended by leader: 111, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } }
I20251025 14:08:22.807654 32001 catalog_manager.cc:5649] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "9037b63836be4aa8881919d2f92bddbe" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9037b63836be4aa8881919d2f92bddbe" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40149 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:22.819000 31499 tablet_server.cc:178] TabletServer@127.30.194.193:40149 shutting down...
I20251025 14:08:22.821175 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:22.821334 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:08:22.821377 31499 raft_consensus.cc:2243] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.821406 31499 raft_consensus.cc:2272] T fe0a1912011042f8a898415d79c61705 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.821682 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:08:22.821720 31499 raft_consensus.cc:2243] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.821744 31499 raft_consensus.cc:2272] T 0f293353cb3849e690958cc3b8ecd063 P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.822031 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:08:22.822067 31499 raft_consensus.cc:2243] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.822100 31499 raft_consensus.cc:2272] T eed22feb81fa4447950719e95c6bb7dd P 9037b63836be4aa8881919d2f92bddbe [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.833986 31499 tablet_server.cc:195] TabletServer@127.30.194.193:40149 shutdown complete.
I20251025 14:08:22.835361 31499 master.cc:561] Master@127.30.194.254:46577 shutting down...
I20251025 14:08:22.837971 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:22.838027 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:22.838045 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 498e1a9cb85c4ac598016b957bd21e6e: stopping tablet replica
I20251025 14:08:22.849581 31499 master.cc:583] Master@127.30.194.254:46577 shutdown complete.
[ OK ] TxnCommitITest.TestAbortInProgress (1299 ms)
[ RUN ] TxnCommitITest.TestBackgroundAborts
I20251025 14:08:22.853219 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:46835
I20251025 14:08:22.853358 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:22.854373 32307 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:22.854486 32308 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:22.854517 32310 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:22.854671 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:22.854751 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:22.854780 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:22.854802 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401302854801 us; error 0 us; skew 500 ppm
I20251025 14:08:22.855307 31499 webserver.cc:492] Webserver started at http://127.30.194.254:33867/ using document root <none> and password file <none>
I20251025 14:08:22.855378 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:22.855418 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:22.855469 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:22.855702 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "41891c3f79b94625973e9e8771d49e5c"
format_stamp: "Formatted at 2025-10-25 14:08:22 on dist-test-slave-v4l2"
I20251025 14:08:22.856530 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:22.856972 32315 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.857141 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.857188 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "41891c3f79b94625973e9e8771d49e5c"
format_stamp: "Formatted at 2025-10-25 14:08:22 on dist-test-slave-v4l2"
I20251025 14:08:22.857232 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:22.885459 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:22.885650 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:22.888396 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:46835
I20251025 14:08:22.889974 32377 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:46835 every 8 connection(s)
I20251025 14:08:22.890089 32378 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:22.891076 32378 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c: Bootstrap starting.
I20251025 14:08:22.891359 32378 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:22.891883 32378 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c: No bootstrap required, opened a new log
I20251025 14:08:22.892017 32378 raft_consensus.cc:359] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER }
I20251025 14:08:22.892071 32378 raft_consensus.cc:385] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:22.892091 32378 raft_consensus.cc:740] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 41891c3f79b94625973e9e8771d49e5c, State: Initialized, Role: FOLLOWER
I20251025 14:08:22.892170 32378 consensus_queue.cc:260] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER }
I20251025 14:08:22.892204 32378 raft_consensus.cc:399] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:22.892232 32378 raft_consensus.cc:493] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:22.892261 32378 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:22.892727 32378 raft_consensus.cc:515] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER }
I20251025 14:08:22.892783 32378 leader_election.cc:304] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 41891c3f79b94625973e9e8771d49e5c; no voters:
I20251025 14:08:22.892885 32378 leader_election.cc:290] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:22.892930 32381 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:22.893070 32381 raft_consensus.cc:697] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 1 LEADER]: Becoming Leader. State: Replica: 41891c3f79b94625973e9e8771d49e5c, State: Running, Role: LEADER
I20251025 14:08:22.893069 32378 sys_catalog.cc:565] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:22.893167 32381 consensus_queue.cc:237] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER }
I20251025 14:08:22.893455 32382 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "41891c3f79b94625973e9e8771d49e5c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER } }
I20251025 14:08:22.893472 32383 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 41891c3f79b94625973e9e8771d49e5c. Latest consensus state: current_term: 1 leader_uuid: "41891c3f79b94625973e9e8771d49e5c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "41891c3f79b94625973e9e8771d49e5c" member_type: VOTER } }
I20251025 14:08:22.893610 32382 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:22.893642 32383 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:22.893814 32388 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:22.893944 32388 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:22.894447 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:22.894482 32388 catalog_manager.cc:1357] Generated new cluster ID: 3cbb6de330a7446e89094e83a4e63b86
I20251025 14:08:22.894520 32388 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:22.902560 32388 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:22.902981 32388 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:22.909301 32388 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c: Generated new TSK 0
I20251025 14:08:22.909406 32388 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:22.910090 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:22.911269 32406 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:22.911355 32405 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:22.911413 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:22.911518 32408 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:22.911625 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:22.911669 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:22.911686 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401302911687 us; error 0 us; skew 500 ppm
I20251025 14:08:22.912241 31499 webserver.cc:492] Webserver started at http://127.30.194.193:40759/ using document root <none> and password file <none>
I20251025 14:08:22.912315 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:22.912360 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:22.912417 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:22.912690 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "67879ead5baf453696f6ce534106157d"
format_stamp: "Formatted at 2025-10-25 14:08:22 on dist-test-slave-v4l2"
I20251025 14:08:22.912925 32332 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51074:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
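
For reference, the request above is for Kudu's internal transaction status table (table_type: TXN_STATUS_TABLE), which Kudu creates itself; the public client API cannot create system tables. Purely as an illustrative sketch, an ordinary user table with the same columns, compound primary key (txn_id, entry_type, identifier), and single RANGE (txn_id) partition covering [0, 1000000) could be declared as below; the table name and helper function are hypothetical.

// Illustrative sketch only -- not part of the test log. A real TXN_STATUS_TABLE
// is created internally by Kudu, not through this API.
#include <memory>
#include <vector>

#include <kudu/client/client.h>
#include <kudu/common/partial_row.h>

using kudu::client::KuduClient;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;
using kudu::KuduPartialRow;
using kudu::Status;

static Status CreateTxnStatusLikeTable(
    const kudu::client::sp::shared_ptr<KuduClient>& client) {
  // Columns and compound primary key as logged:
  // (txn_id INT64, entry_type INT8, identifier STRING) key, plus metadata STRING.
  KuduSchemaBuilder b;
  b.AddColumn("txn_id")->Type(KuduColumnSchema::INT64)->NotNull();
  b.AddColumn("entry_type")->Type(KuduColumnSchema::INT8)->NotNull();
  b.AddColumn("identifier")->Type(KuduColumnSchema::STRING)->NotNull();
  b.AddColumn("metadata")->Type(KuduColumnSchema::STRING)->NotNull();
  b.SetPrimaryKey({"txn_id", "entry_type", "identifier"});
  KuduSchema schema;
  KUDU_RETURN_NOT_OK(b.Build(&schema));

  // One range partition with 0 <= txn_id < 1000000 (inclusive lower bound,
  // exclusive upper bound), matching the partition the tablet server reports
  // for this table further down in the log.
  std::unique_ptr<KuduPartialRow> lower(schema.NewRow());
  std::unique_ptr<KuduPartialRow> upper(schema.NewRow());
  KUDU_RETURN_NOT_OK(lower->SetInt64("txn_id", 0));
  KUDU_RETURN_NOT_OK(upper->SetInt64("txn_id", 1000000));

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  return creator->table_name("txn_status_like")  // hypothetical table name
      .schema(&schema)
      .set_range_partition_columns({"txn_id"})
      .add_range_partition(lower.release(), upper.release())
      .num_replicas(1)
      .Create();
}
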
I20251025 14:08:22.913688 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:22.914075 32413 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.914202 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.914234 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "67879ead5baf453696f6ce534106157d"
format_stamp: "Formatted at 2025-10-25 14:08:22 on dist-test-slave-v4l2"
I20251025 14:08:22.914265 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestBackgroundAborts.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:22.929937 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:22.930143 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:22.930474 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:22.930518 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.930560 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:22.930586 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:22.933530 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:37115
I20251025 14:08:22.933727 32478 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:37115 every 8 connection(s)
I20251025 14:08:22.934063 32479 heartbeater.cc:344] Connected to a master server at 127.30.194.254:46835
I20251025 14:08:22.934124 32479 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:22.934255 32479 heartbeater.cc:507] Master 127.30.194.254:46835 requested a full tablet report, sending...
I20251025 14:08:22.934491 32332 ts_manager.cc:194] Registered new tserver with Master: 67879ead5baf453696f6ce534106157d (127.30.194.193:37115)
I20251025 14:08:22.934851 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001125894s
I20251025 14:08:22.935310 32332 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51080
I20251025 14:08:23.918020 32332 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51100:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:23.921967 32443 tablet_service.cc:1505] Processing CreateTablet for tablet fb75415940714677aed14a2b87cc3f50 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=bbfd2167ad334d048720c68b5166ce18]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:23.922089 32443 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fb75415940714677aed14a2b87cc3f50. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:23.923285 32499 tablet_bootstrap.cc:492] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: Bootstrap starting.
I20251025 14:08:23.923602 32499 tablet_bootstrap.cc:654] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:23.924064 32499 tablet_bootstrap.cc:492] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: No bootstrap required, opened a new log
I20251025 14:08:23.924104 32499 ts_tablet_manager.cc:1403] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:23.924250 32499 raft_consensus.cc:359] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.924299 32499 raft_consensus.cc:385] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:23.924321 32499 raft_consensus.cc:740] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 67879ead5baf453696f6ce534106157d, State: Initialized, Role: FOLLOWER
I20251025 14:08:23.924378 32499 consensus_queue.cc:260] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.924412 32499 raft_consensus.cc:399] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:23.924453 32499 raft_consensus.cc:493] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:23.924480 32499 raft_consensus.cc:3060] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:23.924975 32499 raft_consensus.cc:515] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.925076 32499 leader_election.cc:304] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 67879ead5baf453696f6ce534106157d; no voters:
I20251025 14:08:23.925184 32499 leader_election.cc:290] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:23.925297 32499 ts_tablet_manager.cc:1434] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:23.925304 32502 raft_consensus.cc:2804] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:23.925374 32479 heartbeater.cc:499] Master 127.30.194.254:46835 was elected leader, sending a full tablet report...
I20251025 14:08:23.925457 32502 raft_consensus.cc:697] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Becoming Leader. State: Replica: 67879ead5baf453696f6ce534106157d, State: Running, Role: LEADER
I20251025 14:08:23.925509 32502 consensus_queue.cc:237] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.925693 32501 tablet_replica.cc:442] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "67879ead5baf453696f6ce534106157d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } } }
I20251025 14:08:23.925726 32503 tablet_replica.cc:442] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: TxnStatusTablet state changed. Reason: New leader 67879ead5baf453696f6ce534106157d. Latest consensus state: current_term: 1 leader_uuid: "67879ead5baf453696f6ce534106157d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } } }
I20251025 14:08:23.925766 32501 tablet_replica.cc:445] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:23.925809 32503 tablet_replica.cc:445] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:23.925910 32332 catalog_manager.cc:5649] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d reported cstate change: term changed from 0 to 1, leader changed from <none> to 67879ead5baf453696f6ce534106157d (127.30.194.193). New cstate: current_term: 1 leader_uuid: "67879ead5baf453696f6ce534106157d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:23.926087 32505 txn_status_manager.cc:874] Waiting until the node catches up with all replicated operations from the previous term...
I20251025 14:08:23.926138 32505 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:23.967681 31499 test_util.cc:276] Using random seed: 856139710
I20251025 14:08:23.971663 32332 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51122:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
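
The request dump above is the protobuf form of a client-side CreateTable call: three columns with "key" as the INT32 primary key, one replica, and a hash-partition schema with 2 buckets on "key". For orientation, an equivalent table could be created through the public Kudu C++ client API roughly as sketched below; this is a minimal sketch, not the test's actual code, and the function name, master address parameter, and error handling are placeholders.

#include <memory>
#include <string>

#include <kudu/client/client.h>

using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

// Sketch: create a table equivalent to the CreateTable request above
// (key INT32 primary key, int_val INT32 NOT NULL, string_val STRING nullable,
// 1 replica, hash-partitioned on "key" into 2 buckets).
Status CreateTestWorkloadTable(const std::string& master_addr) {
  kudu::client::sp::shared_ptr<KuduClient> client;
  Status s = KuduClientBuilder().add_master_server_addr(master_addr).Build(&client);
  if (!s.ok()) return s;

  KuduSchemaBuilder b;
  b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
  b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
  KuduSchema schema;
  s = b.Build(&schema);
  if (!s.ok()) return s;

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  return creator->table_name("test-workload")
      .schema(&schema)
      .add_hash_partitions({"key"}, /*num_buckets=*/2)
      .num_replicas(1)
      .Create();
}
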
I20251025 14:08:23.973156 32443 tablet_service.cc:1505] Processing CreateTablet for tablet 39a201448eac4f2fbc325917560b2a6a (DEFAULT_TABLE table=test-workload [id=d281ba9f5ff149d881b97b1214bf91c4]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:23.973181 32442 tablet_service.cc:1505] Processing CreateTablet for tablet ee039bb3afe646b5b7432cdd73084118 (DEFAULT_TABLE table=test-workload [id=d281ba9f5ff149d881b97b1214bf91c4]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:23.973275 32443 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 39a201448eac4f2fbc325917560b2a6a. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:23.973358 32442 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee039bb3afe646b5b7432cdd73084118. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:23.974300 32499 tablet_bootstrap.cc:492] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: Bootstrap starting.
I20251025 14:08:23.974684 32499 tablet_bootstrap.cc:654] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:23.975212 32499 tablet_bootstrap.cc:492] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: No bootstrap required, opened a new log
I20251025 14:08:23.975253 32499 ts_tablet_manager.cc:1403] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:23.975389 32499 raft_consensus.cc:359] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.975436 32499 raft_consensus.cc:385] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:23.975450 32499 raft_consensus.cc:740] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 67879ead5baf453696f6ce534106157d, State: Initialized, Role: FOLLOWER
I20251025 14:08:23.975497 32499 consensus_queue.cc:260] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.975538 32499 raft_consensus.cc:399] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:23.975551 32499 raft_consensus.cc:493] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:23.975577 32499 raft_consensus.cc:3060] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:23.976038 32499 raft_consensus.cc:515] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.976092 32499 leader_election.cc:304] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 67879ead5baf453696f6ce534106157d; no voters:
I20251025 14:08:23.976153 32499 leader_election.cc:290] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:23.976202 32501 raft_consensus.cc:2804] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:23.976230 32499 ts_tablet_manager.cc:1434] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:23.976284 32501 raft_consensus.cc:697] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Becoming Leader. State: Replica: 67879ead5baf453696f6ce534106157d, State: Running, Role: LEADER
I20251025 14:08:23.976318 32499 tablet_bootstrap.cc:492] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: Bootstrap starting.
I20251025 14:08:23.976332 32501 consensus_queue.cc:237] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.976706 32499 tablet_bootstrap.cc:654] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:23.977234 32499 tablet_bootstrap.cc:492] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: No bootstrap required, opened a new log
I20251025 14:08:23.977274 32499 ts_tablet_manager.cc:1403] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:23.977252 32332 catalog_manager.cc:5649] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d reported cstate change: term changed from 0 to 1, leader changed from <none> to 67879ead5baf453696f6ce534106157d (127.30.194.193). New cstate: current_term: 1 leader_uuid: "67879ead5baf453696f6ce534106157d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:23.977389 32499 raft_consensus.cc:359] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.977432 32499 raft_consensus.cc:385] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:23.977453 32499 raft_consensus.cc:740] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 67879ead5baf453696f6ce534106157d, State: Initialized, Role: FOLLOWER
I20251025 14:08:23.977509 32499 consensus_queue.cc:260] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.977548 32499 raft_consensus.cc:399] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:23.977573 32499 raft_consensus.cc:493] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:23.977592 32499 raft_consensus.cc:3060] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:23.978026 32499 raft_consensus.cc:515] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.978087 32499 leader_election.cc:304] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 67879ead5baf453696f6ce534106157d; no voters:
I20251025 14:08:23.978142 32499 leader_election.cc:290] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:23.978165 32502 raft_consensus.cc:2804] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:23.978235 32502 raft_consensus.cc:697] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Becoming Leader. State: Replica: 67879ead5baf453696f6ce534106157d, State: Running, Role: LEADER
I20251025 14:08:23.978268 32499 ts_tablet_manager.cc:1434] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:23.978281 32502 consensus_queue.cc:237] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } }
I20251025 14:08:23.978669 32332 catalog_manager.cc:5649] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d reported cstate change: term changed from 0 to 1, leader changed from <none> to 67879ead5baf453696f6ce534106157d (127.30.194.193). New cstate: current_term: 1 leader_uuid: "67879ead5baf453696f6ce534106157d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "67879ead5baf453696f6ce534106157d" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37115 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:24.433153 32420 txn_status_manager.cc:1391] automatically aborted stale txn (ID 0) past 0.396s from last keepalive heartbeat (effective timeout is 0.300s)
W20251025 14:08:24.433995 32531 tablet_replica.cc:1306] Aborted: operation has been aborted: cancelling pending write operations [suppressed 1 similar messages]
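
The two lines above show the background staleness check firing: the last keepalive heartbeat for transaction ID 0 was 0.396 s ago, which exceeds the effective 0.300 s timeout, so the TxnStatusManager aborts the transaction and its pending write operations are cancelled. A minimal, self-contained sketch of that comparison follows; the names and types here are illustrative, not Kudu's internal ones.

#include <chrono>

// Hypothetical helper: a transaction is considered stale once the time since
// its last keepalive heartbeat exceeds the effective keepalive timeout.
bool IsTxnStale(std::chrono::steady_clock::time_point last_keepalive,
                std::chrono::milliseconds effective_timeout) {
  const auto elapsed = std::chrono::steady_clock::now() - last_keepalive;
  // In the log above: elapsed is roughly 396 ms and effective_timeout is
  // 300 ms, so the check fires and the transaction (ID 0) is aborted.
  return elapsed > effective_timeout;
}
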
I20251025 14:08:25.552628 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:25.554900 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:25.555009 31499 tablet_replica.cc:333] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d: stopping tablet replica
I20251025 14:08:25.555078 31499 raft_consensus.cc:2243] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:25.555128 31499 raft_consensus.cc:2272] T ee039bb3afe646b5b7432cdd73084118 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:25.555397 31499 tablet_replica.cc:333] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d: stopping tablet replica
I20251025 14:08:25.555433 31499 raft_consensus.cc:2243] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:25.555459 31499 raft_consensus.cc:2272] T fb75415940714677aed14a2b87cc3f50 P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:25.555560 31499 tablet_replica.cc:333] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d: stopping tablet replica
I20251025 14:08:25.555590 31499 raft_consensus.cc:2243] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:25.555622 31499 raft_consensus.cc:2272] T 39a201448eac4f2fbc325917560b2a6a P 67879ead5baf453696f6ce534106157d [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:25.567077 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:25.568151 31499 master.cc:561] Master@127.30.194.254:46835 shutting down...
I20251025 14:08:25.570714 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:25.570773 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:25.570793 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 41891c3f79b94625973e9e8771d49e5c: stopping tablet replica
I20251025 14:08:25.582105 31499 master.cc:583] Master@127.30.194.254:46835 shutdown complete.
[ OK ] TxnCommitITest.TestBackgroundAborts (2732 ms)
[ RUN ] TxnCommitITest.TestCommitWhileDeletingTxnStatusManager
I20251025 14:08:25.585564 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:32791
I20251025 14:08:25.585709 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:25.586678 32542 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:25.586740 32543 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:25.586737 32545 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:25.586922 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:25.587023 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:25.587054 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:25.587070 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401305587070 us; error 0 us; skew 500 ppm
I20251025 14:08:25.587599 31499 webserver.cc:492] Webserver started at http://127.30.194.254:39351/ using document root <none> and password file <none>
I20251025 14:08:25.587680 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:25.587724 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:25.587770 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:25.588009 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "e787f49656034dcd8db98156a4305fa6"
format_stamp: "Formatted at 2025-10-25 14:08:25 on dist-test-slave-v4l2"
I20251025 14:08:25.588838 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.001s
I20251025 14:08:25.589330 32550 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.589449 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.589494 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "e787f49656034dcd8db98156a4305fa6"
format_stamp: "Formatted at 2025-10-25 14:08:25 on dist-test-slave-v4l2"
I20251025 14:08:25.589540 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:25.606884 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:25.607053 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:25.609808 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:32791
I20251025 14:08:25.611537 32612 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:32791 every 8 connection(s)
I20251025 14:08:25.611634 32613 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:25.612568 32613 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6: Bootstrap starting.
I20251025 14:08:25.612797 32613 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:25.613256 32613 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6: No bootstrap required, opened a new log
I20251025 14:08:25.613372 32613 raft_consensus.cc:359] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER }
I20251025 14:08:25.613416 32613 raft_consensus.cc:385] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:25.613430 32613 raft_consensus.cc:740] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e787f49656034dcd8db98156a4305fa6, State: Initialized, Role: FOLLOWER
I20251025 14:08:25.613466 32613 consensus_queue.cc:260] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER }
I20251025 14:08:25.613510 32613 raft_consensus.cc:399] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:25.613534 32613 raft_consensus.cc:493] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:25.613554 32613 raft_consensus.cc:3060] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:25.613967 32613 raft_consensus.cc:515] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER }
I20251025 14:08:25.614017 32613 leader_election.cc:304] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e787f49656034dcd8db98156a4305fa6; no voters:
I20251025 14:08:25.614101 32613 leader_election.cc:290] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:25.614164 32616 raft_consensus.cc:2804] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:25.614231 32613 sys_catalog.cc:565] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:25.614305 32616 raft_consensus.cc:697] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 1 LEADER]: Becoming Leader. State: Replica: e787f49656034dcd8db98156a4305fa6, State: Running, Role: LEADER
I20251025 14:08:25.614356 32616 consensus_queue.cc:237] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER }
I20251025 14:08:25.614576 32617 sys_catalog.cc:455] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "e787f49656034dcd8db98156a4305fa6" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER } }
I20251025 14:08:25.614584 32618 sys_catalog.cc:455] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e787f49656034dcd8db98156a4305fa6. Latest consensus state: current_term: 1 leader_uuid: "e787f49656034dcd8db98156a4305fa6" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e787f49656034dcd8db98156a4305fa6" member_type: VOTER } }
I20251025 14:08:25.614709 32617 sys_catalog.cc:458] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:25.614723 32618 sys_catalog.cc:458] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:25.614883 32623 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:25.615020 32623 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:25.615531 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:25.615535 32623 catalog_manager.cc:1357] Generated new cluster ID: f7a7f4d825734730ba75076744d53e11
I20251025 14:08:25.615604 32623 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:25.630971 32623 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:25.631433 32623 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:25.641921 32623 catalog_manager.cc:6022] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6: Generated new TSK 0
I20251025 14:08:25.641995 32623 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:25.646113 32567 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60238:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
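
Unlike the user table, this request is range-partitioned on txn_id, and the opaque split_rows_range_bounds blob is the serialized pair of bound rows that the tablet server later reports as RANGE (txn_id) PARTITION 0 <= VALUES < 1000000. The real kudu_system.kudu_transactions table is created internally by Kudu's transaction machinery; the sketch below only illustrates, via the public client API and a hypothetical table name, what that schema and range bound correspond to.

#include <memory>

#include <kudu/client/client.h>
#include <kudu/common/partial_row.h>

using kudu::KuduPartialRow;
using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

// Sketch only: mirrors the schema and the single [0, 1000000) range partition
// encoded in the request above.
Status SketchTxnStatusLikeTable(const kudu::client::sp::shared_ptr<KuduClient>& client) {
  KuduSchemaBuilder b;
  b.AddColumn("txn_id")->Type(KuduColumnSchema::INT64)->NotNull();
  b.AddColumn("entry_type")->Type(KuduColumnSchema::INT8)->NotNull();
  b.AddColumn("identifier")->Type(KuduColumnSchema::STRING)->NotNull();
  b.AddColumn("metadata")->Type(KuduColumnSchema::STRING)->NotNull();
  b.SetPrimaryKey({"txn_id", "entry_type", "identifier"});
  KuduSchema schema;
  Status s = b.Build(&schema);
  if (!s.ok()) return s;

  // Bound rows: lower bound 0 (inclusive), upper bound 1000000 (exclusive).
  // add_range_partition() takes ownership of the rows, hence release().
  std::unique_ptr<KuduPartialRow> lower(schema.NewRow());
  std::unique_ptr<KuduPartialRow> upper(schema.NewRow());
  s = lower->SetInt64("txn_id", 0);
  if (!s.ok()) return s;
  s = upper->SetInt64("txn_id", 1000000);
  if (!s.ok()) return s;

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  return creator->table_name("txn_status_like_table")
      .schema(&schema)
      .set_range_partition_columns({"txn_id"})
      .add_range_partition(lower.release(), upper.release())
      .num_replicas(1)
      .Create();
}
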
I20251025 14:08:25.647298 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:25.648365 32640 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:25.648391 32641 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:25.648442 32643 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:25.648614 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:25.648697 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:25.648730 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:25.648744 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401305648744 us; error 0 us; skew 500 ppm
I20251025 14:08:25.649252 31499 webserver.cc:492] Webserver started at http://127.30.194.193:45639/ using document root <none> and password file <none>
I20251025 14:08:25.649315 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:25.649348 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:25.649380 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:25.649622 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "c84d8736e9404f70a3e1eb61f6a60100"
format_stamp: "Formatted at 2025-10-25 14:08:25 on dist-test-slave-v4l2"
I20251025 14:08:25.650434 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:25.650817 32648 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.650934 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.650975 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "c84d8736e9404f70a3e1eb61f6a60100"
format_stamp: "Formatted at 2025-10-25 14:08:25 on dist-test-slave-v4l2"
I20251025 14:08:25.651018 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:25.662642 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:25.662828 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:25.663123 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:25.663156 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.663185 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:25.663220 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:25.666594 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:36561
I20251025 14:08:25.666627 32713 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:36561 every 8 connection(s)
I20251025 14:08:25.666989 32714 heartbeater.cc:344] Connected to a master server at 127.30.194.254:32791
I20251025 14:08:25.667047 32714 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:25.667146 32714 heartbeater.cc:507] Master 127.30.194.254:32791 requested a full tablet report, sending...
I20251025 14:08:25.667366 32567 ts_manager.cc:194] Registered new tserver with Master: c84d8736e9404f70a3e1eb61f6a60100 (127.30.194.193:36561)
I20251025 14:08:25.667883 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001119012s
I20251025 14:08:25.668104 32567 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:60240
I20251025 14:08:26.651067 32567 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60254:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:26.654994 32678 tablet_service.cc:1505] Processing CreateTablet for tablet 98dcb870538445f9b99ff87b158bc96d (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=a3945e71fe9f46a89166256f51aba733]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:26.655112 32678 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 98dcb870538445f9b99ff87b158bc96d. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:26.656199 32734 tablet_bootstrap.cc:492] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Bootstrap starting.
I20251025 14:08:26.656504 32734 tablet_bootstrap.cc:654] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:26.656953 32734 tablet_bootstrap.cc:492] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: No bootstrap required, opened a new log
I20251025 14:08:26.657029 32734 ts_tablet_manager.cc:1403] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:26.657191 32734 raft_consensus.cc:359] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.657243 32734 raft_consensus.cc:385] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:26.657266 32734 raft_consensus.cc:740] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Initialized, Role: FOLLOWER
I20251025 14:08:26.657323 32734 consensus_queue.cc:260] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.657366 32734 raft_consensus.cc:399] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:26.657388 32734 raft_consensus.cc:493] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:26.657418 32734 raft_consensus.cc:3060] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:26.657925 32734 raft_consensus.cc:515] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.658001 32734 leader_election.cc:304] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c84d8736e9404f70a3e1eb61f6a60100; no voters:
I20251025 14:08:26.658123 32734 leader_election.cc:290] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:26.658154 32736 raft_consensus.cc:2804] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:26.658270 32734 ts_tablet_manager.cc:1434] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:26.658298 32714 heartbeater.cc:499] Master 127.30.194.254:32791 was elected leader, sending a full tablet report...
I20251025 14:08:26.658314 32736 raft_consensus.cc:697] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Becoming Leader. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Running, Role: LEADER
I20251025 14:08:26.658452 32736 consensus_queue.cc:237] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.658646 32738 tablet_replica.cc:442] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: TxnStatusTablet state changed. Reason: New leader c84d8736e9404f70a3e1eb61f6a60100. Latest consensus state: current_term: 1 leader_uuid: "c84d8736e9404f70a3e1eb61f6a60100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } } }
I20251025 14:08:26.658646 32737 tablet_replica.cc:442] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "c84d8736e9404f70a3e1eb61f6a60100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } } }
I20251025 14:08:26.658753 32738 tablet_replica.cc:445] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:26.658885 32567 catalog_manager.cc:5649] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 reported cstate change: term changed from 0 to 1, leader changed from <none> to c84d8736e9404f70a3e1eb61f6a60100 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c84d8736e9404f70a3e1eb61f6a60100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:26.658951 32737 tablet_replica.cc:445] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:26.659097 32741 txn_status_manager.cc:874] Waiting until the node catches up with all replicated operations from the previous term...
I20251025 14:08:26.659171 32741 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:26.701833 31499 test_util.cc:276] Using random seed: 858873862
I20251025 14:08:26.705910 32567 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60286:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:08:26.707285 32678 tablet_service.cc:1505] Processing CreateTablet for tablet d25d968955c143dfb4b56d16d7964fda (DEFAULT_TABLE table=test-workload [id=5ff9bf2f20844c9d837d7c2864faca06]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:26.707317 32677 tablet_service.cc:1505] Processing CreateTablet for tablet 4cee3b771028466a8bfd022454e4999e (DEFAULT_TABLE table=test-workload [id=5ff9bf2f20844c9d837d7c2864faca06]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:26.707396 32678 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d25d968955c143dfb4b56d16d7964fda. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:26.707501 32677 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4cee3b771028466a8bfd022454e4999e. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:26.708364 32734 tablet_bootstrap.cc:492] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: Bootstrap starting.
I20251025 14:08:26.708818 32734 tablet_bootstrap.cc:654] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:26.709333 32734 tablet_bootstrap.cc:492] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: No bootstrap required, opened a new log
I20251025 14:08:26.709372 32734 ts_tablet_manager.cc:1403] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:26.709502 32734 raft_consensus.cc:359] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.709544 32734 raft_consensus.cc:385] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:26.709558 32734 raft_consensus.cc:740] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Initialized, Role: FOLLOWER
I20251025 14:08:26.709589 32734 consensus_queue.cc:260] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.709632 32734 raft_consensus.cc:399] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:26.709648 32734 raft_consensus.cc:493] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:26.709666 32734 raft_consensus.cc:3060] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:26.710088 32734 raft_consensus.cc:515] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.710139 32734 leader_election.cc:304] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c84d8736e9404f70a3e1eb61f6a60100; no voters:
I20251025 14:08:26.710176 32734 leader_election.cc:290] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:26.710220 32738 raft_consensus.cc:2804] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:26.710259 32734 ts_tablet_manager.cc:1434] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:26.710278 32738 raft_consensus.cc:697] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Becoming Leader. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Running, Role: LEADER
I20251025 14:08:26.710319 32734 tablet_bootstrap.cc:492] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: Bootstrap starting.
I20251025 14:08:26.710340 32738 consensus_queue.cc:237] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.710707 32734 tablet_bootstrap.cc:654] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:26.710754 32567 catalog_manager.cc:5649] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 reported cstate change: term changed from 0 to 1, leader changed from <none> to c84d8736e9404f70a3e1eb61f6a60100 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c84d8736e9404f70a3e1eb61f6a60100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:26.711237 32734 tablet_bootstrap.cc:492] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: No bootstrap required, opened a new log
I20251025 14:08:26.711292 32734 ts_tablet_manager.cc:1403] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:26.711428 32734 raft_consensus.cc:359] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.711495 32734 raft_consensus.cc:385] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:26.711522 32734 raft_consensus.cc:740] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Initialized, Role: FOLLOWER
I20251025 14:08:26.711560 32734 consensus_queue.cc:260] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.711591 32734 raft_consensus.cc:399] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:26.711613 32734 raft_consensus.cc:493] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:26.711633 32734 raft_consensus.cc:3060] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:26.712064 32734 raft_consensus.cc:515] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.712105 32734 leader_election.cc:304] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c84d8736e9404f70a3e1eb61f6a60100; no voters:
I20251025 14:08:26.712137 32734 leader_election.cc:290] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:26.712179 32738 raft_consensus.cc:2804] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:26.712246 32738 raft_consensus.cc:697] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Becoming Leader. State: Replica: c84d8736e9404f70a3e1eb61f6a60100, State: Running, Role: LEADER
I20251025 14:08:26.712277 32738 consensus_queue.cc:237] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } }
I20251025 14:08:26.712311 32734 ts_tablet_manager.cc:1434] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:26.712648 32567 catalog_manager.cc:5649] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 reported cstate change: term changed from 0 to 1, leader changed from <none> to c84d8736e9404f70a3e1eb61f6a60100 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c84d8736e9404f70a3e1eb61f6a60100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c84d8736e9404f70a3e1eb61f6a60100" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 36561 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:26.782796 31499 tablet_replica.cc:333] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: stopping tablet replica
I20251025 14:08:26.782887 31499 raft_consensus.cc:2243] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:26.782925 31499 raft_consensus.cc:2272] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:26.833434 31499 txn_status_manager.cc:765] Waiting for 1 task(s) to stop
I20251025 14:08:26.833600 31499 ts_tablet_manager.cc:1916] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20251025 14:08:26.835155 31499 ts_tablet_manager.cc:1929] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.5
I20251025 14:08:26.835242 31499 log.cc:1199] T 98dcb870538445f9b99ff87b158bc96d P c84d8736e9404f70a3e1eb61f6a60100: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitWhileDeletingTxnStatusManager.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/98dcb870538445f9b99ff87b158bc96d
W20251025 14:08:27.005857 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 6) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.005944 32596 rpcz_store.cc:269] 1025 14:08:26.921216 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:26.921246 (+ 30us) service_pool.cc:225] Handling call
1025 14:08:27.005826 (+ 84580us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.083387 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 7) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.083465 32595 rpcz_store.cc:269] 1025 14:08:26.997512 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:26.997541 (+ 29us) service_pool.cc:225] Handling call
1025 14:08:27.083352 (+ 85811us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.148947 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 8) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.149047 32596 rpcz_store.cc:269] 1025 14:08:27.074541 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.074577 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:27.148932 (+ 74355us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.237443 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 9) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.237524 32596 rpcz_store.cc:269] 1025 14:08:27.150451 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.150478 (+ 27us) service_pool.cc:225] Handling call
1025 14:08:27.237428 (+ 86950us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.301055 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 10) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.301138 32595 rpcz_store.cc:269] 1025 14:08:27.226840 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.226868 (+ 28us) service_pool.cc:225] Handling call
1025 14:08:27.301042 (+ 74174us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.383831 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 11) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.383911 32595 rpcz_store.cc:269] 1025 14:08:27.302553 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.302719 (+ 166us) service_pool.cc:225] Handling call
1025 14:08:27.383817 (+ 81098us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.460459 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 12) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.460533 32596 rpcz_store.cc:269] 1025 14:08:27.378910 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.378947 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:27.460439 (+ 81492us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.534224 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 13) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.534299 32595 rpcz_store.cc:269] 1025 14:08:27.455556 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.455585 (+ 29us) service_pool.cc:225] Handling call
1025 14:08:27.534210 (+ 78625us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.617653 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 14) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.617750 32596 rpcz_store.cc:269] 1025 14:08:27.532249 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.532308 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:27.617634 (+ 85326us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.684832 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 15) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.684909 32595 rpcz_store.cc:269] 1025 14:08:27.608940 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.608973 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:27.684819 (+ 75846us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.768062 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 16) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.768167 32595 rpcz_store.cc:269] 1025 14:08:27.685229 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.685258 (+ 29us) service_pool.cc:225] Handling call
1025 14:08:27.768050 (+ 82792us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.835839 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 17) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.835927 32596 rpcz_store.cc:269] 1025 14:08:27.761679 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.761723 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:27.835823 (+ 74100us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.919919 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 18) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.920023 32596 rpcz_store.cc:269] 1025 14:08:27.837365 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.837411 (+ 46us) service_pool.cc:225] Handling call
1025 14:08:27.919901 (+ 82490us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:27.998061 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 19) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:27.998152 32595 rpcz_store.cc:269] 1025 14:08:27.913796 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.913847 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:27.998049 (+ 84202us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.070719 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 20) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.070806 32596 rpcz_store.cc:269] 1025 14:08:27.990611 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:27.990670 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:28.070704 (+ 80034us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.145359 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 21) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.145440 32595 rpcz_store.cc:269] 1025 14:08:28.067727 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.067757 (+ 30us) service_pool.cc:225] Handling call
1025 14:08:28.145345 (+ 77588us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.229740 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 22) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.229821 32596 rpcz_store.cc:269] 1025 14:08:28.144463 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.144495 (+ 32us) service_pool.cc:225] Handling call
1025 14:08:28.229728 (+ 85233us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.303524 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 23) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.303601 32595 rpcz_store.cc:269] 1025 14:08:28.221007 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.221056 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:28.303510 (+ 82454us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.379747 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 24) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.379829 32596 rpcz_store.cc:269] 1025 14:08:28.297389 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.297418 (+ 29us) service_pool.cc:225] Handling call
1025 14:08:28.379736 (+ 82318us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.459141 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 25) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.459209 32595 rpcz_store.cc:269] 1025 14:08:28.374144 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.374204 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:28.459128 (+ 84924us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.536598 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 26) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.536681 32596 rpcz_store.cc:269] 1025 14:08:28.450453 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.450500 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:28.536585 (+ 86085us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.603061 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 27) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.603140 32595 rpcz_store.cc:269] 1025 14:08:28.526924 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.526958 (+ 34us) service_pool.cc:225] Handling call
1025 14:08:28.603049 (+ 76091us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.678265 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 28) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.678347 32595 rpcz_store.cc:269] 1025 14:08:28.603364 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.603397 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:28.678250 (+ 74853us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.764660 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 29) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.764753 32595 rpcz_store.cc:269] 1025 14:08:28.679733 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.679784 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:28.764647 (+ 84863us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.834674 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 30) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.834760 32596 rpcz_store.cc:269] 1025 14:08:28.756124 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.756174 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:28.834663 (+ 78489us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.912596 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 31) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.912673 32595 rpcz_store.cc:269] 1025 14:08:28.832411 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.832468 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:28.912582 (+ 80114us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:28.991557 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 32) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:28.991652 32596 rpcz_store.cc:269] 1025 14:08:28.909469 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.909509 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:28.991546 (+ 82037us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.073385 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 33) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.073459 32595 rpcz_store.cc:269] 1025 14:08:28.986260 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:28.986296 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:29.073375 (+ 87079us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.137491 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 34) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.137569 32596 rpcz_store.cc:269] 1025 14:08:29.063263 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.063315 (+ 52us) service_pool.cc:225] Handling call
1025 14:08:29.137477 (+ 74162us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.223721 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 35) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.223800 32596 rpcz_store.cc:269] 1025 14:08:29.138957 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.138993 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:29.223708 (+ 84715us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.298971 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 36) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.299068 32595 rpcz_store.cc:269] 1025 14:08:29.215443 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.215494 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:29.298956 (+ 83462us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.368108 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 37) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.368185 32596 rpcz_store.cc:269] 1025 14:08:29.292332 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.292375 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:29.368095 (+ 75720us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.457209 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 38) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.457293 32596 rpcz_store.cc:269] 1025 14:08:29.369524 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.369558 (+ 34us) service_pool.cc:225] Handling call
1025 14:08:29.457197 (+ 87639us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.525044 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 39) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.525123 32595 rpcz_store.cc:269] 1025 14:08:29.445943 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.445990 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:29.525032 (+ 79042us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.602581 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 40) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.602653 32596 rpcz_store.cc:269] 1025 14:08:29.522999 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.523059 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:29.602569 (+ 79510us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.683938 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 41) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.684016 32595 rpcz_store.cc:269] 1025 14:08:29.599721 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.599761 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:29.683926 (+ 84165us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.756139 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 42) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.756220 32596 rpcz_store.cc:269] 1025 14:08:29.676307 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.676360 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:29.756129 (+ 79769us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.837981 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 43) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.838057 32595 rpcz_store.cc:269] 1025 14:08:29.752682 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.752732 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:29.837970 (+ 85238us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.911065 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 44) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.911154 32596 rpcz_store.cc:269] 1025 14:08:29.829895 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.829936 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:29.911052 (+ 81116us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:29.990020 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 45) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:29.990105 32595 rpcz_store.cc:269] 1025 14:08:29.906632 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.906679 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:29.990006 (+ 83327us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.063740 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 46) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.063835 32596 rpcz_store.cc:269] 1025 14:08:29.983802 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:29.983863 (+ 61us) service_pool.cc:225] Handling call
1025 14:08:30.063723 (+ 79860us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.145745 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 47) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.145816 32595 rpcz_store.cc:269] 1025 14:08:30.060736 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.060787 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:30.145733 (+ 84946us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.216449 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 48) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.216526 32596 rpcz_store.cc:269] 1025 14:08:30.137457 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.137514 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:30.216435 (+ 78921us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.295133 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 49) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.295212 32595 rpcz_store.cc:269] 1025 14:08:30.214475 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.214536 (+ 61us) service_pool.cc:225] Handling call
1025 14:08:30.295121 (+ 80585us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.369669 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 50) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.369740 32596 rpcz_store.cc:269] 1025 14:08:30.291157 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.291211 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:30.369658 (+ 78447us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.452972 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 51) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.453064 32595 rpcz_store.cc:269] 1025 14:08:30.367831 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.367875 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:30.452961 (+ 85086us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.518867 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 52) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.518952 32596 rpcz_store.cc:269] 1025 14:08:30.444362 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.444398 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:30.518853 (+ 74455us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.605391 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 53) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.605484 32596 rpcz_store.cc:269] 1025 14:08:30.520376 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.520414 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:30.605378 (+ 84964us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.674760 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 54) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.674830 32595 rpcz_store.cc:269] 1025 14:08:30.596731 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.596781 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:30.674748 (+ 77967us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.751376 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 55) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.751451 32596 rpcz_store.cc:269] 1025 14:08:30.672975 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.673018 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:30.751365 (+ 78347us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.836827 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 56) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.836908 32595 rpcz_store.cc:269] 1025 14:08:30.749463 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.749528 (+ 65us) service_pool.cc:225] Handling call
1025 14:08:30.836815 (+ 87287us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.907310 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 57) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.907393 32596 rpcz_store.cc:269] 1025 14:08:30.826074 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.826131 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:30.907298 (+ 81167us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:30.976753 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 58) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:30.976837 32595 rpcz_store.cc:269] 1025 14:08:30.902584 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.902627 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:30.976740 (+ 74113us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.052949 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 59) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.053052 32595 rpcz_store.cc:269] 1025 14:08:30.978314 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:30.978370 (+ 56us) service_pool.cc:225] Handling call
1025 14:08:31.052936 (+ 74566us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.138319 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 60) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.138413 32595 rpcz_store.cc:269] 1025 14:08:31.054533 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.054579 (+ 46us) service_pool.cc:225] Handling call
1025 14:08:31.138307 (+ 83728us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.212139 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 61) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.212219 32596 rpcz_store.cc:269] 1025 14:08:31.130979 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.131059 (+ 80us) service_pool.cc:225] Handling call
1025 14:08:31.212126 (+ 81067us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.291579 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 62) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.291658 32595 rpcz_store.cc:269] 1025 14:08:31.208115 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.208185 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:31.291566 (+ 83381us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.369153 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 63) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.369243 32596 rpcz_store.cc:269] 1025 14:08:31.284803 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.284861 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:31.369143 (+ 84282us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.440922 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 64) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.441031 32595 rpcz_store.cc:269] 1025 14:08:31.361274 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.361311 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:31.440908 (+ 79597us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.522292 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 65) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.522396 32596 rpcz_store.cc:269] 1025 14:08:31.437942 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.437988 (+ 46us) service_pool.cc:225] Handling call
1025 14:08:31.522277 (+ 84289us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.594081 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 66) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.594157 32595 rpcz_store.cc:269] 1025 14:08:31.514700 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.514737 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:31.594069 (+ 79332us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.672191 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 67) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.672307 32596 rpcz_store.cc:269] 1025 14:08:31.590948 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.590997 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:31.672125 (+ 81128us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.742556 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 68) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.742646 32595 rpcz_store.cc:269] 1025 14:08:31.667765 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.667797 (+ 32us) service_pool.cc:225] Handling call
1025 14:08:31.742541 (+ 74744us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.818945 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 69) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.819022 32595 rpcz_store.cc:269] 1025 14:08:31.744153 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.744219 (+ 66us) service_pool.cc:225] Handling call
1025 14:08:31.818932 (+ 74713us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.898775 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 70) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.898874 32595 rpcz_store.cc:269] 1025 14:08:31.820388 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.820422 (+ 34us) service_pool.cc:225] Handling call
1025 14:08:31.898757 (+ 78335us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:31.981977 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 71) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:31.982059 32596 rpcz_store.cc:269] 1025 14:08:31.896752 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.896793 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:31.981964 (+ 85171us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.058595 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 72) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.058666 32595 rpcz_store.cc:269] 1025 14:08:31.973431 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:31.973474 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:32.058583 (+ 85109us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.128911 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 73) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.128983 32596 rpcz_store.cc:269] 1025 14:08:32.049751 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.049786 (+ 35us) service_pool.cc:225] Handling call
1025 14:08:32.128897 (+ 79111us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.207707 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 74) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.207787 32595 rpcz_store.cc:269] 1025 14:08:32.126426 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.126469 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:32.207695 (+ 81226us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.278362 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 75) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.278443 32596 rpcz_store.cc:269] 1025 14:08:32.203649 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.203692 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:32.278348 (+ 74656us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.355441 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 76) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.355517 32596 rpcz_store.cc:269] 1025 14:08:32.279924 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.279973 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:32.355429 (+ 75456us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.438977 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 77) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.439070 32596 rpcz_store.cc:269] 1025 14:08:32.356908 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.356945 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:32.438963 (+ 82018us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.517460 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 78) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.517557 32595 rpcz_store.cc:269] 1025 14:08:32.433341 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.433386 (+ 45us) service_pool.cc:225] Handling call
1025 14:08:32.517446 (+ 84060us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.597906 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 79) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.597982 32596 rpcz_store.cc:269] 1025 14:08:32.509738 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.509778 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:32.597895 (+ 88117us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.671182 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 80) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.671257 32595 rpcz_store.cc:269] 1025 14:08:32.586103 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.586159 (+ 56us) service_pool.cc:225] Handling call
1025 14:08:32.671171 (+ 85012us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.744627 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 81) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.744704 32596 rpcz_store.cc:269] 1025 14:08:32.662503 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.662552 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:32.744615 (+ 82063us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.813427 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 82) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.813531 32595 rpcz_store.cc:269] 1025 14:08:32.738889 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.738930 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:32.813404 (+ 74474us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.897889 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 83) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.897979 32595 rpcz_store.cc:269] 1025 14:08:32.815002 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.815051 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:32.897875 (+ 82824us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:32.975502 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 84) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:32.975585 32596 rpcz_store.cc:269] 1025 14:08:32.891304 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.891340 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:32.975491 (+ 84151us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.051263 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 85) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.051352 32595 rpcz_store.cc:269] 1025 14:08:32.968644 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:32.968711 (+ 67us) service_pool.cc:225] Handling call
1025 14:08:33.051250 (+ 82539us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.129774 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 86) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.129855 32596 rpcz_store.cc:269] 1025 14:08:33.045272 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.045325 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:33.129763 (+ 84438us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.201471 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 87) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.201555 32595 rpcz_store.cc:269] 1025 14:08:33.122054 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.122104 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:33.201460 (+ 79356us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.279072 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 88) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.279151 32596 rpcz_store.cc:269] 1025 14:08:33.198388 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.198434 (+ 46us) service_pool.cc:225] Handling call
1025 14:08:33.279059 (+ 80625us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.359774 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 89) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.359854 32595 rpcz_store.cc:269] 1025 14:08:33.275120 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.275175 (+ 55us) service_pool.cc:225] Handling call
1025 14:08:33.359761 (+ 84586us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.430649 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 90) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.430737 32596 rpcz_store.cc:269] 1025 14:08:33.351922 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.351971 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:33.430638 (+ 78667us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.513866 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 91) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.513949 32595 rpcz_store.cc:269] 1025 14:08:33.428523 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.428580 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:33.513853 (+ 85273us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.580245 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 92) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.580358 32596 rpcz_store.cc:269] 1025 14:08:33.505386 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.505433 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:33.580230 (+ 74797us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.668350 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 93) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.668426 32596 rpcz_store.cc:269] 1025 14:08:33.581873 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.581939 (+ 66us) service_pool.cc:225] Handling call
1025 14:08:33.668337 (+ 86398us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.735705 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 94) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.735787 32595 rpcz_store.cc:269] 1025 14:08:33.658166 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.658202 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:33.735692 (+ 77490us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.821753 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 95) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.821837 32596 rpcz_store.cc:269] 1025 14:08:33.735080 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.735122 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:33.821739 (+ 86617us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.898429 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 96) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.898507 32595 rpcz_store.cc:269] 1025 14:08:33.811519 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.811559 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:33.898403 (+ 86844us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:33.968961 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 97) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:33.969076 32596 rpcz_store.cc:269] 1025 14:08:33.888380 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.888416 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:33.968946 (+ 80530us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.040856 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 98) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.040939 32595 rpcz_store.cc:269] 1025 14:08:33.965090 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:33.965145 (+ 55us) service_pool.cc:225] Handling call
1025 14:08:34.040843 (+ 75698us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.120810 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 99) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.120893 32595 rpcz_store.cc:269] 1025 14:08:34.042325 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.042372 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:34.120800 (+ 78428us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.200040 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 100) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.200132 32596 rpcz_store.cc:269] 1025 14:08:34.118777 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.118840 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:34.200027 (+ 81187us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.279470 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 101) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.279551 32595 rpcz_store.cc:269] 1025 14:08:34.195515 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.195569 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:34.279458 (+ 83889us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.346972 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 102) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.347059 32596 rpcz_store.cc:269] 1025 14:08:34.272741 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.272789 (+ 48us) service_pool.cc:225] Handling call
1025 14:08:34.346955 (+ 74166us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.433585 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 103) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.433688 32596 rpcz_store.cc:269] 1025 14:08:34.348536 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.348576 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:34.433574 (+ 84998us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.512784 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 104) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.512862 32595 rpcz_store.cc:269] 1025 14:08:34.424961 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.425048 (+ 87us) service_pool.cc:225] Handling call
1025 14:08:34.512769 (+ 87721us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.587121 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 105) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.587230 32596 rpcz_store.cc:269] 1025 14:08:34.501270 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.501321 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:34.587103 (+ 85782us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.657842 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 106) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.657917 32595 rpcz_store.cc:269] 1025 14:08:34.578407 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.578466 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:34.657830 (+ 79364us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.735525 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 107) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.735621 32596 rpcz_store.cc:269] 1025 14:08:34.654804 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.654848 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:34.735510 (+ 80662us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.814081 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 108) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.814168 32595 rpcz_store.cc:269] 1025 14:08:34.731602 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.731681 (+ 79us) service_pool.cc:225] Handling call
1025 14:08:34.814069 (+ 82388us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.894316 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 109) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.894402 32596 rpcz_store.cc:269] 1025 14:08:34.808299 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.808358 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:34.894304 (+ 85946us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:34.962990 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 110) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:34.963069 32595 rpcz_store.cc:269] 1025 14:08:34.884732 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.884791 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:34.962978 (+ 78187us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.036302 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 111) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.036391 32596 rpcz_store.cc:269] 1025 14:08:34.961049 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:34.961100 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:35.036288 (+ 75188us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.124150 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 112) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.124246 32596 rpcz_store.cc:269] 1025 14:08:35.037893 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.037947 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:35.124134 (+ 86187us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.196970 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 113) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.197080 32595 rpcz_store.cc:269] 1025 14:08:35.114341 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.114394 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:35.196960 (+ 82566us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.276549 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 114) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.276631 32596 rpcz_store.cc:269] 1025 14:08:35.190778 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.190831 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:35.276536 (+ 85705us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.343318 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 115) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.343405 32595 rpcz_store.cc:269] 1025 14:08:35.267770 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.267821 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:35.343305 (+ 75484us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.427361 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 116) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.427448 32595 rpcz_store.cc:269] 1025 14:08:35.344861 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.344894 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:35.427346 (+ 82452us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.498173 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 117) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.498260 32596 rpcz_store.cc:269] 1025 14:08:35.421291 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.421343 (+ 52us) service_pool.cc:225] Handling call
1025 14:08:35.498161 (+ 76818us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.585453 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 118) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.585559 32595 rpcz_store.cc:269] 1025 14:08:35.498073 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.498112 (+ 39us) service_pool.cc:225] Handling call
1025 14:08:35.585441 (+ 87329us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.656812 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 119) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.656898 32596 rpcz_store.cc:269] 1025 14:08:35.574874 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.574919 (+ 45us) service_pool.cc:225] Handling call
1025 14:08:35.656799 (+ 81880us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.725466 32722 meta_cache.cc:1015] Timed out: LookupRpc { table: 'kudu_system.kudu_transactions', partition-key: (RANGE (txn_id): 0), attempt: 1 } failed: LookupRpc timed out after deadline expired: GetTableLocations RPC to 127.30.194.254:32791 timed out after 0.000s (SENT)
W20251025 14:08:35.725626 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 120) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.725667 32595 rpcz_store.cc:269] 1025 14:08:35.651141 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.651203 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:35.725619 (+ 74416us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.808750 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 121) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.808868 32595 rpcz_store.cc:269] 1025 14:08:35.727091 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.727138 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:35.808732 (+ 81594us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.887851 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 122) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.887933 32596 rpcz_store.cc:269] 1025 14:08:35.803528 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.803598 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:35.887839 (+ 84241us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:35.961123 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 123) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:35.961207 32595 rpcz_store.cc:269] 1025 14:08:35.880597 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.880654 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:35.961109 (+ 80455us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.042593 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 124) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.042675 32596 rpcz_store.cc:269] 1025 14:08:35.957550 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:35.957591 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:36.042582 (+ 84991us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.114856 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 125) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.114933 32595 rpcz_store.cc:269] 1025 14:08:36.033846 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.033884 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:36.114836 (+ 80952us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.187834 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 126) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.187932 32596 rpcz_store.cc:269] 1025 14:08:36.110328 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.110372 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:36.187818 (+ 77446us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.271353 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 127) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.271435 32595 rpcz_store.cc:269] 1025 14:08:36.187514 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.187578 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:36.271340 (+ 83762us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.352506 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 128) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.352586 32596 rpcz_store.cc:269] 1025 14:08:36.264596 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.264655 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:36.352494 (+ 87839us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.422158 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 129) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.422273 32595 rpcz_store.cc:269] 1025 14:08:36.341055 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.341093 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:36.422142 (+ 81049us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.497785 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 130) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.497869 32596 rpcz_store.cc:269] 1025 14:08:36.418243 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.418295 (+ 52us) service_pool.cc:225] Handling call
1025 14:08:36.497771 (+ 79476us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.582486 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 131) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.582561 32595 rpcz_store.cc:269] 1025 14:08:36.494820 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.494863 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:36.582473 (+ 87610us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.659497 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 132) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.659574 32596 rpcz_store.cc:269] 1025 14:08:36.571369 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.571423 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:36.659486 (+ 88063us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.736737 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 133) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.736821 32595 rpcz_store.cc:269] 1025 14:08:36.648197 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.648231 (+ 34us) service_pool.cc:225] Handling call
1025 14:08:36.736723 (+ 88492us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.804430 32596 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 134) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.804507 32596 rpcz_store.cc:269] 1025 14:08:36.725663 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.725705 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:36.804419 (+ 78714us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:08:36.837009 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:36.839409 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:36.839533 31499 tablet_replica.cc:333] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100: stopping tablet replica
I20251025 14:08:36.839598 31499 raft_consensus.cc:2243] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:36.839650 31499 raft_consensus.cc:2272] T 4cee3b771028466a8bfd022454e4999e P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Raft consensus is shut down!
W20251025 14:08:36.840169 31499 mvcc.cc:118] aborting op with timestamp 7214699752583573504 in state 0; MVCC is closed
I20251025 14:08:36.840271 31499 tablet_replica.cc:333] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100: stopping tablet replica
I20251025 14:08:36.840299 31499 raft_consensus.cc:2243] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:36.840330 31499 raft_consensus.cc:2272] T d25d968955c143dfb4b56d16d7964fda P c84d8736e9404f70a3e1eb61f6a60100 [term 1 FOLLOWER]: Raft consensus is shut down!
W20251025 14:08:36.840499 31499 mvcc.cc:118] aborting op with timestamp 7214699752583610368 in state 0; MVCC is closed
W20251025 14:08:36.846334 32725 meta_cache.cc:302] tablet 98dcb870538445f9b99ff87b158bc96d: replica c84d8736e9404f70a3e1eb61f6a60100 (127.30.194.193:36561) has failed: Network error: Client connection negotiation failed: client connection to 127.30.194.193:36561: connect: Connection refused (error 111)
I20251025 14:08:36.852756 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:36.853883 31499 master.cc:561] Master@127.30.194.254:32791 shutting down...
W20251025 14:08:36.856846 32722 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:32791)
W20251025 14:08:36.865796 32597 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.GetTransactionState from 127.0.0.1:60274 (request call id 5) took 10030 ms (client timeout 9999 ms). Trace:
W20251025 14:08:36.865854 32597 rpcz_store.cc:269] 1025 14:08:26.835714 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:26.835751 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:36.865788 (+10030037us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.884428 32722 meta_cache.cc:1015] Timed out: LookupRpc { table: 'kudu_system.kudu_transactions', partition-key: (RANGE (txn_id): 0), attempt: 6 } failed: LookupRpc timed out after deadline expired: LookupRpc { table: 'kudu_system.kudu_transactions', partition-key: (RANGE (txn_id): 0), attempt: 6 } passed its deadline: Remote error: Service unavailable: service kudu.master.MasterService not registered on Master
W20251025 14:08:36.884600 32595 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 135) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:36.884668 32595 rpcz_store.cc:269] 1025 14:08:36.802453 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:36.802496 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:36.884586 (+ 82090us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:36.884797 32554 connection.cc:441] server connection from 127.0.0.1:60274 torn down before Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60274 (request call id 135) could send its response
I20251025 14:08:36.885725 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:36.885792 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:36.885811 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P e787f49656034dcd8db98156a4305fa6: stopping tablet replica
I20251025 14:08:36.897814 31499 master.cc:583] Master@127.30.194.254:32791 shutdown complete.
[ OK ] TxnCommitITest.TestCommitWhileDeletingTxnStatusManager (11315 ms)
[ RUN ] TxnCommitITest.TestCommitAfterDeletingParticipant
I20251025 14:08:36.901371 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:33731
I20251025 14:08:36.901510 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:36.902905 312 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:36.902989 309 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:36.902951 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:36.902992 310 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:36.903264 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:36.903301 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:36.903314 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401316903315 us; error 0 us; skew 500 ppm
I20251025 14:08:36.903836 31499 webserver.cc:492] Webserver started at http://127.30.194.254:39231/ using document root <none> and password file <none>
I20251025 14:08:36.903929 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:36.903973 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:36.904023 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:36.904294 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "d367adc9a06042da82faff4f54ca0510"
format_stamp: "Formatted at 2025-10-25 14:08:36 on dist-test-slave-v4l2"
I20251025 14:08:36.905220 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:36.906062 317 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:36.906328 31499 fs_manager.cc:730] Time spent opening block manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:36.906395 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "d367adc9a06042da82faff4f54ca0510"
format_stamp: "Formatted at 2025-10-25 14:08:36 on dist-test-slave-v4l2"
I20251025 14:08:36.906530 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:36.915685 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:36.915863 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:36.919493 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:33731
I20251025 14:08:36.920936 379 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:33731 every 8 connection(s)
I20251025 14:08:36.921125 380 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:36.922132 380 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: Bootstrap starting.
I20251025 14:08:36.922384 380 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:36.922819 380 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: No bootstrap required, opened a new log
I20251025 14:08:36.922926 380 raft_consensus.cc:359] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER }
I20251025 14:08:36.922971 380 raft_consensus.cc:385] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:36.922986 380 raft_consensus.cc:740] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d367adc9a06042da82faff4f54ca0510, State: Initialized, Role: FOLLOWER
I20251025 14:08:36.923023 380 consensus_queue.cc:260] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER }
I20251025 14:08:36.923066 380 raft_consensus.cc:399] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:36.923086 380 raft_consensus.cc:493] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:36.923100 380 raft_consensus.cc:3060] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:36.960706 380 raft_consensus.cc:515] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER }
I20251025 14:08:36.960855 380 leader_election.cc:304] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d367adc9a06042da82faff4f54ca0510; no voters:
I20251025 14:08:36.961030 380 leader_election.cc:290] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:36.961110 383 raft_consensus.cc:2804] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:36.961216 380 sys_catalog.cc:565] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:36.961313 383 raft_consensus.cc:697] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 1 LEADER]: Becoming Leader. State: Replica: d367adc9a06042da82faff4f54ca0510, State: Running, Role: LEADER
I20251025 14:08:36.961437 383 consensus_queue.cc:237] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER }
I20251025 14:08:36.961760 384 sys_catalog.cc:455] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "d367adc9a06042da82faff4f54ca0510" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER } }
I20251025 14:08:36.961831 384 sys_catalog.cc:458] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:36.962028 385 sys_catalog.cc:455] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [sys.catalog]: SysCatalogTable state changed. Reason: New leader d367adc9a06042da82faff4f54ca0510. Latest consensus state: current_term: 1 leader_uuid: "d367adc9a06042da82faff4f54ca0510" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d367adc9a06042da82faff4f54ca0510" member_type: VOTER } }
I20251025 14:08:36.962113 385 sys_catalog.cc:458] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:36.962354 388 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:36.962513 388 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:36.962821 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:36.963591 388 catalog_manager.cc:1357] Generated new cluster ID: 04cac01cd2294dffad900e6444ba600a
I20251025 14:08:36.963640 388 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:36.979403 388 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:36.979818 388 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:36.986063 388 catalog_manager.cc:6022] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: Generated new TSK 0
I20251025 14:08:36.986137 388 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:36.989758 333 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55126:
name: "kudu_system.kudu_transactions"
schema {
  columns {
    name: "txn_id"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "entry_type"
    type: INT8
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "identifier"
    type: STRING
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "metadata"
    type: STRING
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
  rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
  indirect_data: """"
}
partition_schema {
  range_schema {
    columns {
      name: "txn_id"
    }
  }
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:36.994637 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:36.995821 407 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:36.995903 408 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:36.995968 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:36.995929 410 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:36.996145 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:36.996188 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:36.996212 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401316996211 us; error 0 us; skew 500 ppm
I20251025 14:08:36.996767 31499 webserver.cc:492] Webserver started at http://127.30.194.193:35497/ using document root <none> and password file <none>
I20251025 14:08:36.996867 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:36.996917 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:36.996965 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:36.997288 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "0839f45abe0040dbb6fac12abb50e59f"
format_stamp: "Formatted at 2025-10-25 14:08:36 on dist-test-slave-v4l2"
I20251025 14:08:36.998129 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:36.998560 415 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:36.998690 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:36.998731 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "0839f45abe0040dbb6fac12abb50e59f"
format_stamp: "Formatted at 2025-10-25 14:08:36 on dist-test-slave-v4l2"
I20251025 14:08:36.998776 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:37.008399 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:37.008584 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:37.008914 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:37.008948 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:37.008978 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:37.009058 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:37.012526 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:40421
I20251025 14:08:37.012557 485 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:40421 every 8 connection(s)
I20251025 14:08:37.012950 486 heartbeater.cc:344] Connected to a master server at 127.30.194.254:33731
I20251025 14:08:37.013029 486 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:37.013136 486 heartbeater.cc:507] Master 127.30.194.254:33731 requested a full tablet report, sending...
I20251025 14:08:37.013372 333 ts_manager.cc:194] Registered new tserver with Master: 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193:40421)
I20251025 14:08:37.013855 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001121083s
I20251025 14:08:37.014089 333 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:55132
I20251025 14:08:37.995038 333 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55154:
name: "kudu_system.kudu_transactions"
schema {
  columns {
    name: "txn_id"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "entry_type"
    type: INT8
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "identifier"
    type: STRING
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "metadata"
    type: STRING
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
  rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
  indirect_data: """"
}
partition_schema {
  range_schema {
    columns {
      name: "txn_id"
    }
  }
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:37.999246 445 tablet_service.cc:1505] Processing CreateTablet for tablet 383ba1220cc6495eb1e2299d9bf0e9b9 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=5a07b76afe7d49e2a74feb0835165990]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:37.999370 445 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 383ba1220cc6495eb1e2299d9bf0e9b9. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:38.000617 501 tablet_bootstrap.cc:492] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: Bootstrap starting.
I20251025 14:08:38.001009 501 tablet_bootstrap.cc:654] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:38.001617 501 tablet_bootstrap.cc:492] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: No bootstrap required, opened a new log
I20251025 14:08:38.001668 501 ts_tablet_manager.cc:1403] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.001813 501 raft_consensus.cc:359] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.001863 501 raft_consensus.cc:385] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:38.001888 501 raft_consensus.cc:740] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Initialized, Role: FOLLOWER
I20251025 14:08:38.001945 501 consensus_queue.cc:260] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.001981 501 raft_consensus.cc:399] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:38.002002 501 raft_consensus.cc:493] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:38.002032 501 raft_consensus.cc:3060] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:38.002643 501 raft_consensus.cc:515] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.002708 501 leader_election.cc:304] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 0839f45abe0040dbb6fac12abb50e59f; no voters:
I20251025 14:08:38.002820 501 leader_election.cc:290] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:38.002853 503 raft_consensus.cc:2804] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:38.002957 501 ts_tablet_manager.cc:1434] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.003003 486 heartbeater.cc:499] Master 127.30.194.254:33731 was elected leader, sending a full tablet report...
I20251025 14:08:38.002993 503 raft_consensus.cc:697] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Becoming Leader. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Running, Role: LEADER
I20251025 14:08:38.003089 503 consensus_queue.cc:237] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.003285 504 tablet_replica.cc:442] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "0839f45abe0040dbb6fac12abb50e59f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } } }
I20251025 14:08:38.003356 504 tablet_replica.cc:445] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:38.003314 505 tablet_replica.cc:442] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: TxnStatusTablet state changed. Reason: New leader 0839f45abe0040dbb6fac12abb50e59f. Latest consensus state: current_term: 1 leader_uuid: "0839f45abe0040dbb6fac12abb50e59f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } } }
I20251025 14:08:38.003397 505 tablet_replica.cc:445] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:38.003532 507 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:38.003568 333 catalog_manager.cc:5649] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f reported cstate change: term changed from 0 to 1, leader changed from <none> to 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193). New cstate: current_term: 1 leader_uuid: "0839f45abe0040dbb6fac12abb50e59f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:38.003630 507 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:38.048007 31499 test_util.cc:276] Using random seed: 870220034
I20251025 14:08:38.052264 333 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55186:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  hash_schema {
    columns {
      name: "key"
    }
    num_buckets: 2
    seed: 0
  }
}
I20251025 14:08:38.053747 445 tablet_service.cc:1505] Processing CreateTablet for tablet e1377f87d9e241da818b105cb0bc4843 (DEFAULT_TABLE table=test-workload [id=f7a836a84adf45489b9bd63ecc3b1126]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:38.053747 444 tablet_service.cc:1505] Processing CreateTablet for tablet 812e99b6902d47bdb8a49e8f3d41d9b4 (DEFAULT_TABLE table=test-workload [id=f7a836a84adf45489b9bd63ecc3b1126]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:38.053906 444 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 812e99b6902d47bdb8a49e8f3d41d9b4. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:38.053970 445 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet e1377f87d9e241da818b105cb0bc4843. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:38.055270 501 tablet_bootstrap.cc:492] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Bootstrap starting.
I20251025 14:08:38.055680 501 tablet_bootstrap.cc:654] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:38.056274 501 tablet_bootstrap.cc:492] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: No bootstrap required, opened a new log
I20251025 14:08:38.056325 501 ts_tablet_manager.cc:1403] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.056484 501 raft_consensus.cc:359] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.056533 501 raft_consensus.cc:385] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:38.056546 501 raft_consensus.cc:740] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Initialized, Role: FOLLOWER
I20251025 14:08:38.056578 501 consensus_queue.cc:260] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.056617 501 raft_consensus.cc:399] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:38.056641 501 raft_consensus.cc:493] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:38.056671 501 raft_consensus.cc:3060] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:38.057302 501 raft_consensus.cc:515] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.057361 501 leader_election.cc:304] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 0839f45abe0040dbb6fac12abb50e59f; no voters:
I20251025 14:08:38.057394 501 leader_election.cc:290] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:38.057442 503 raft_consensus.cc:2804] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:38.057469 501 ts_tablet_manager.cc:1434] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.057498 503 raft_consensus.cc:697] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Becoming Leader. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Running, Role: LEADER
I20251025 14:08:38.057528 501 tablet_bootstrap.cc:492] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Bootstrap starting.
I20251025 14:08:38.057539 503 consensus_queue.cc:237] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.057935 501 tablet_bootstrap.cc:654] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:38.058471 501 tablet_bootstrap.cc:492] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: No bootstrap required, opened a new log
I20251025 14:08:38.058475 333 catalog_manager.cc:5649] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f reported cstate change: term changed from 0 to 1, leader changed from <none> to 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193). New cstate: current_term: 1 leader_uuid: "0839f45abe0040dbb6fac12abb50e59f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:38.058562 501 ts_tablet_manager.cc:1403] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.058683 501 raft_consensus.cc:359] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.058732 501 raft_consensus.cc:385] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:38.058755 501 raft_consensus.cc:740] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Initialized, Role: FOLLOWER
I20251025 14:08:38.058800 501 consensus_queue.cc:260] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.058838 501 raft_consensus.cc:399] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:38.058858 501 raft_consensus.cc:493] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:38.058884 501 raft_consensus.cc:3060] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:38.059479 501 raft_consensus.cc:515] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.059535 501 leader_election.cc:304] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 0839f45abe0040dbb6fac12abb50e59f; no voters:
I20251025 14:08:38.059563 501 leader_election.cc:290] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:38.059640 504 raft_consensus.cc:2804] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:38.059644 501 ts_tablet_manager.cc:1434] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.059706 504 raft_consensus.cc:697] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Becoming Leader. State: Replica: 0839f45abe0040dbb6fac12abb50e59f, State: Running, Role: LEADER
I20251025 14:08:38.059743 504 consensus_queue.cc:237] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } }
I20251025 14:08:38.060122 333 catalog_manager.cc:5649] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f reported cstate change: term changed from 0 to 1, leader changed from <none> to 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193). New cstate: current_term: 1 leader_uuid: "0839f45abe0040dbb6fac12abb50e59f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "0839f45abe0040dbb6fac12abb50e59f" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 40421 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:38.130503 333 catalog_manager.cc:2507] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:55182:
table { table_name: "test-workload" } modify_external_catalogs: true
I20251025 14:08:38.130596 333 catalog_manager.cc:2755] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:55182:
table { table_name: "test-workload" } modify_external_catalogs: true
I20251025 14:08:38.131217 333 catalog_manager.cc:5936] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: Sending DeleteTablet for 1 replicas of tablet e1377f87d9e241da818b105cb0bc4843
I20251025 14:08:38.131310 333 catalog_manager.cc:5936] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: Sending DeleteTablet for 1 replicas of tablet 812e99b6902d47bdb8a49e8f3d41d9b4
I20251025 14:08:38.131565 445 tablet_service.cc:1552] Processing DeleteTablet for tablet 812e99b6902d47bdb8a49e8f3d41d9b4 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:08:38 UTC) from {username='slave'} at 127.0.0.1:46464
I20251025 14:08:38.131577 444 tablet_service.cc:1552] Processing DeleteTablet for tablet e1377f87d9e241da818b105cb0bc4843 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:08:38 UTC) from {username='slave'} at 127.0.0.1:46464
I20251025 14:08:38.131796 543 tablet_replica.cc:333] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: stopping tablet replica
I20251025 14:08:38.131847 543 raft_consensus.cc:2243] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:38.131898 543 raft_consensus.cc:2272] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:38.132210 543 ts_tablet_manager.cc:1916] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Deleting tablet data with delete state TABLET_DATA_DELETED
W20251025 14:08:38.132596 477 meta_cache.cc:788] Not found: LookupRpcById { tablet: 'e1377f87d9e241da818b105cb0bc4843', attempt: 1 } failed
I20251025 14:08:38.132640 477 txn_status_manager.cc:244] Participant e1377f87d9e241da818b105cb0bc4843 of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: 'e1377f87d9e241da818b105cb0bc4843', attempt: 1 } failed
I20251025 14:08:38.132706 477 txn_status_manager.cc:244] Participant 812e99b6902d47bdb8a49e8f3d41d9b4 of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '812e99b6902d47bdb8a49e8f3d41d9b4', attempt: 1 } failed
I20251025 14:08:38.132725 477 txn_status_manager.cc:206] Scheduling write for ABORT_IN_PROGRESS for txn 0
I20251025 14:08:38.133464 477 txn_status_manager.cc:337] Participant e1377f87d9e241da818b105cb0bc4843 was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: 'e1377f87d9e241da818b105cb0bc4843', attempt: 1 } failed
I20251025 14:08:38.133538 477 txn_status_manager.cc:337] Participant 812e99b6902d47bdb8a49e8f3d41d9b4 was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '812e99b6902d47bdb8a49e8f3d41d9b4', attempt: 1 } failed
I20251025 14:08:38.134085 543 ts_tablet_manager.cc:1929] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.101
I20251025 14:08:38.134138 543 log.cc:1199] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/812e99b6902d47bdb8a49e8f3d41d9b4
I20251025 14:08:38.134393 543 ts_tablet_manager.cc:1950] T 812e99b6902d47bdb8a49e8f3d41d9b4 P 0839f45abe0040dbb6fac12abb50e59f: Deleting consensus metadata
I20251025 14:08:38.134684 320 catalog_manager.cc:4985] TS 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193:40421): tablet 812e99b6902d47bdb8a49e8f3d41d9b4 (table test-workload [id=f7a836a84adf45489b9bd63ecc3b1126]) successfully deleted
I20251025 14:08:38.134685 543 tablet_replica.cc:333] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: stopping tablet replica
I20251025 14:08:38.134790 543 raft_consensus.cc:2243] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:38.134871 543 raft_consensus.cc:2272] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:38.135252 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:38.136051 543 ts_tablet_manager.cc:1916] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Deleting tablet data with delete state TABLET_DATA_DELETED
I20251025 14:08:38.137779 543 ts_tablet_manager.cc:1929] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.99
I20251025 14:08:38.137931 543 log.cc:1199] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDeletingParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/e1377f87d9e241da818b105cb0bc4843
I20251025 14:08:38.137991 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:38.138358 543 ts_tablet_manager.cc:1950] T e1377f87d9e241da818b105cb0bc4843 P 0839f45abe0040dbb6fac12abb50e59f: Deleting consensus metadata
W20251025 14:08:38.138677 320 catalog_manager.cc:4977] TS 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193:40421): delete failed for tablet e1377f87d9e241da818b105cb0bc4843 with error code TABLET_NOT_RUNNING: Service unavailable: Tablet Manager is not running: MANAGER_QUIESCING
I20251025 14:08:38.138846 31499 tablet_replica.cc:333] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f: stopping tablet replica
I20251025 14:08:38.138896 31499 raft_consensus.cc:2243] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:38.138928 31499 raft_consensus.cc:2272] T 383ba1220cc6495eb1e2299d9bf0e9b9 P 0839f45abe0040dbb6fac12abb50e59f [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:38.151103 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:38.152419 31499 master.cc:561] Master@127.30.194.254:33731 shutting down...
W20251025 14:08:38.180114 320 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.30.194.193:40421: connect: Connection refused (error 111)
W20251025 14:08:38.180577 320 catalog_manager.cc:4712] TS 0839f45abe0040dbb6fac12abb50e59f (127.30.194.193:40421): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet e1377f87d9e241da818b105cb0bc4843: Network error: Client connection negotiation failed: client connection to 127.30.194.193:40421: connect: Connection refused (error 111)
I20251025 14:08:38.181594 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:38.181653 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:38.181672 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P d367adc9a06042da82faff4f54ca0510: stopping tablet replica
I20251025 14:08:38.193737 31499 master.cc:583] Master@127.30.194.254:33731 shutdown complete.
[ OK ] TxnCommitITest.TestCommitAfterDeletingParticipant (1295 ms)
[ RUN ] TxnCommitITest.TestCommitAfterDroppingRangeParticipant
I20251025 14:08:38.197180 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:38967
I20251025 14:08:38.197338 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:38.198426 546 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:38.198463 547 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:38.198588 549 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:38.198757 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:38.198911 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:38.198969 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:38.198985 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401318198985 us; error 0 us; skew 500 ppm
I20251025 14:08:38.199590 31499 webserver.cc:492] Webserver started at http://127.30.194.254:39079/ using document root <none> and password file <none>
I20251025 14:08:38.199687 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:38.199743 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:38.199801 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:38.200078 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "f5367db8ed2b4b39be12ab4272ed4c2a"
format_stamp: "Formatted at 2025-10-25 14:08:38 on dist-test-slave-v4l2"
I20251025 14:08:38.200984 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.001s
I20251025 14:08:38.201596 554 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:38.201740 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:08:38.201781 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "f5367db8ed2b4b39be12ab4272ed4c2a"
format_stamp: "Formatted at 2025-10-25 14:08:38 on dist-test-slave-v4l2"
I20251025 14:08:38.201823 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:38.220878 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:38.221127 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:38.223877 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:38967
I20251025 14:08:38.225129 616 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:38967 every 8 connection(s)
I20251025 14:08:38.225301 617 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:38.226308 617 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: Bootstrap starting.
I20251025 14:08:38.226589 617 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:38.227108 617 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: No bootstrap required, opened a new log
I20251025 14:08:38.227242 617 raft_consensus.cc:359] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER }
I20251025 14:08:38.227308 617 raft_consensus.cc:385] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:38.227324 617 raft_consensus.cc:740] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f5367db8ed2b4b39be12ab4272ed4c2a, State: Initialized, Role: FOLLOWER
I20251025 14:08:38.227366 617 consensus_queue.cc:260] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER }
I20251025 14:08:38.227409 617 raft_consensus.cc:399] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:38.227429 617 raft_consensus.cc:493] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:38.227451 617 raft_consensus.cc:3060] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:38.227918 617 raft_consensus.cc:515] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER }
I20251025 14:08:38.227973 617 leader_election.cc:304] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f5367db8ed2b4b39be12ab4272ed4c2a; no voters:
I20251025 14:08:38.228068 617 leader_election.cc:290] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:38.228128 620 raft_consensus.cc:2804] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:38.228232 617 sys_catalog.cc:565] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:38.228286 620 raft_consensus.cc:697] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 1 LEADER]: Becoming Leader. State: Replica: f5367db8ed2b4b39be12ab4272ed4c2a, State: Running, Role: LEADER
I20251025 14:08:38.228343 620 consensus_queue.cc:237] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER }
I20251025 14:08:38.228582 621 sys_catalog.cc:455] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER } }
I20251025 14:08:38.228693 621 sys_catalog.cc:458] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:38.228600 622 sys_catalog.cc:455] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [sys.catalog]: SysCatalogTable state changed. Reason: New leader f5367db8ed2b4b39be12ab4272ed4c2a. Latest consensus state: current_term: 1 leader_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f5367db8ed2b4b39be12ab4272ed4c2a" member_type: VOTER } }
I20251025 14:08:38.228861 622 sys_catalog.cc:458] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:38.228904 626 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:38.229106 626 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:38.229669 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:38.229692 626 catalog_manager.cc:1357] Generated new cluster ID: ef872114ebe94fea81c6e7ba6422a439
I20251025 14:08:38.229727 626 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:38.250592 626 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:38.251016 626 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:38.259963 626 catalog_manager.cc:6022] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: Generated new TSK 0
I20251025 14:08:38.260037 626 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:38.261552 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:38.262903 647 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:38.263024 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:38.262914 644 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:38.262924 645 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:38.263236 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:38.263270 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:38.263283 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401318263283 us; error 0 us; skew 500 ppm
I20251025 14:08:38.263877 31499 webserver.cc:492] Webserver started at http://127.30.194.193:34269/ using document root <none> and password file <none>
I20251025 14:08:38.263943 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:38.263978 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:38.264014 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:38.264346 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "e98cf2c1faad475eac345e454e70411e"
format_stamp: "Formatted at 2025-10-25 14:08:38 on dist-test-slave-v4l2"
I20251025 14:08:38.265232 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:38.265628 652 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:38.265758 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:38.265808 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "e98cf2c1faad475eac345e454e70411e"
format_stamp: "Formatted at 2025-10-25 14:08:38 on dist-test-slave-v4l2"
I20251025 14:08:38.265849 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:38.267316 571 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53430:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:38.271296 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:38.271442 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:38.271720 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:38.271793 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:38.271839 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:38.271862 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:38.275104 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:41335
I20251025 14:08:38.275131 717 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:41335 every 8 connection(s)
I20251025 14:08:38.275540 718 heartbeater.cc:344] Connected to a master server at 127.30.194.254:38967
I20251025 14:08:38.275593 718 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:38.275703 718 heartbeater.cc:507] Master 127.30.194.254:38967 requested a full tablet report, sending...
I20251025 14:08:38.275915 571 ts_manager.cc:194] Registered new tserver with Master: e98cf2c1faad475eac345e454e70411e (127.30.194.193:41335)
I20251025 14:08:38.276400 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001107047s
I20251025 14:08:38.276648 571 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53436
I20251025 14:08:39.272526 571 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53456:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:39.276401 682 tablet_service.cc:1505] Processing CreateTablet for tablet 833aa948e7ef4302bee56d15dbb05380 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=1d424b8212f94fe99b60395b5c246f10]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:39.276520 682 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 833aa948e7ef4302bee56d15dbb05380. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:39.277204 718 heartbeater.cc:499] Master 127.30.194.254:38967 was elected leader, sending a full tablet report...
I20251025 14:08:39.277793 738 tablet_bootstrap.cc:492] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: Bootstrap starting.
I20251025 14:08:39.278082 738 tablet_bootstrap.cc:654] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:39.278600 738 tablet_bootstrap.cc:492] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: No bootstrap required, opened a new log
I20251025 14:08:39.278653 738 ts_tablet_manager.cc:1403] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:39.278787 738 raft_consensus.cc:359] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.278837 738 raft_consensus.cc:385] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:39.278867 738 raft_consensus.cc:740] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Initialized, Role: FOLLOWER
I20251025 14:08:39.278926 738 consensus_queue.cc:260] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.278973 738 raft_consensus.cc:399] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:39.278995 738 raft_consensus.cc:493] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:39.279026 738 raft_consensus.cc:3060] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:39.279575 738 raft_consensus.cc:515] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.279636 738 leader_election.cc:304] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e98cf2c1faad475eac345e454e70411e; no voters:
I20251025 14:08:39.279750 738 leader_election.cc:290] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:39.279788 740 raft_consensus.cc:2804] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:39.279898 738 ts_tablet_manager.cc:1434] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:39.279933 740 raft_consensus.cc:697] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Becoming Leader. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Running, Role: LEADER
I20251025 14:08:39.280042 740 consensus_queue.cc:237] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.280251 741 tablet_replica.cc:442] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "e98cf2c1faad475eac345e454e70411e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } } }
I20251025 14:08:39.280303 742 tablet_replica.cc:442] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: TxnStatusTablet state changed. Reason: New leader e98cf2c1faad475eac345e454e70411e. Latest consensus state: current_term: 1 leader_uuid: "e98cf2c1faad475eac345e454e70411e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } } }
I20251025 14:08:39.280403 741 tablet_replica.cc:445] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:39.280426 742 tablet_replica.cc:445] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:39.280486 571 catalog_manager.cc:5649] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e reported cstate change: term changed from 0 to 1, leader changed from <none> to e98cf2c1faad475eac345e454e70411e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "e98cf2c1faad475eac345e454e70411e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:39.280553 744 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:39.280583 744 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:39.310549 31499 test_util.cc:276] Using random seed: 871482576
I20251025 14:08:39.315460 571 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53482:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:08:39.316859 682 tablet_service.cc:1505] Processing CreateTablet for tablet 2540174fec5e4502836df2c90046c1ca (DEFAULT_TABLE table=test-workload [id=9c1da13d7c8d42a79b6abdac473641ce]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:39.316886 681 tablet_service.cc:1505] Processing CreateTablet for tablet 4fa79a19ea8c437bb8dd7344af995a31 (DEFAULT_TABLE table=test-workload [id=9c1da13d7c8d42a79b6abdac473641ce]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:39.316982 682 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2540174fec5e4502836df2c90046c1ca. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:39.317088 681 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4fa79a19ea8c437bb8dd7344af995a31. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:39.318008 738 tablet_bootstrap.cc:492] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Bootstrap starting.
I20251025 14:08:39.318434 738 tablet_bootstrap.cc:654] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:39.319080 738 tablet_bootstrap.cc:492] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: No bootstrap required, opened a new log
I20251025 14:08:39.319126 738 ts_tablet_manager.cc:1403] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:39.319271 738 raft_consensus.cc:359] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.319321 738 raft_consensus.cc:385] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:39.319335 738 raft_consensus.cc:740] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Initialized, Role: FOLLOWER
I20251025 14:08:39.319370 738 consensus_queue.cc:260] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.319423 738 raft_consensus.cc:399] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:39.319445 738 raft_consensus.cc:493] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:39.319459 738 raft_consensus.cc:3060] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:39.319905 738 raft_consensus.cc:515] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.319974 738 leader_election.cc:304] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e98cf2c1faad475eac345e454e70411e; no voters:
I20251025 14:08:39.320008 738 leader_election.cc:290] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:39.320081 741 raft_consensus.cc:2804] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:39.320072 738 ts_tablet_manager.cc:1434] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:39.320184 741 raft_consensus.cc:697] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Becoming Leader. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Running, Role: LEADER
I20251025 14:08:39.320195 738 tablet_bootstrap.cc:492] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Bootstrap starting.
I20251025 14:08:39.320225 741 consensus_queue.cc:237] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.320596 738 tablet_bootstrap.cc:654] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:39.320899 571 catalog_manager.cc:5649] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e reported cstate change: term changed from 0 to 1, leader changed from <none> to e98cf2c1faad475eac345e454e70411e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "e98cf2c1faad475eac345e454e70411e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:39.321522 738 tablet_bootstrap.cc:492] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: No bootstrap required, opened a new log
I20251025 14:08:39.321579 738 ts_tablet_manager.cc:1403] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:39.321717 738 raft_consensus.cc:359] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.321776 738 raft_consensus.cc:385] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:39.321801 738 raft_consensus.cc:740] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Initialized, Role: FOLLOWER
I20251025 14:08:39.321837 738 consensus_queue.cc:260] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.321868 738 raft_consensus.cc:399] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:39.321885 738 raft_consensus.cc:493] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:39.321904 738 raft_consensus.cc:3060] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:39.322448 738 raft_consensus.cc:515] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.322492 738 leader_election.cc:304] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e98cf2c1faad475eac345e454e70411e; no voters:
I20251025 14:08:39.322525 738 leader_election.cc:290] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:39.322594 740 raft_consensus.cc:2804] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:39.322654 740 raft_consensus.cc:697] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Becoming Leader. State: Replica: e98cf2c1faad475eac345e454e70411e, State: Running, Role: LEADER
I20251025 14:08:39.322688 740 consensus_queue.cc:237] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } }
I20251025 14:08:39.322603 738 ts_tablet_manager.cc:1434] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:39.323097 571 catalog_manager.cc:5649] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e reported cstate change: term changed from 0 to 1, leader changed from <none> to e98cf2c1faad475eac345e454e70411e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "e98cf2c1faad475eac345e454e70411e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e98cf2c1faad475eac345e454e70411e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 41335 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:39.397333 567 catalog_manager.cc:2507] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:53476:
table { table_name: "test-workload" } modify_external_catalogs: true
I20251025 14:08:39.397424 567 catalog_manager.cc:2755] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:53476:
table { table_name: "test-workload" } modify_external_catalogs: true
I20251025 14:08:39.398028 567 catalog_manager.cc:5936] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: Sending DeleteTablet for 1 replicas of tablet 2540174fec5e4502836df2c90046c1ca
I20251025 14:08:39.398097 567 catalog_manager.cc:5936] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: Sending DeleteTablet for 1 replicas of tablet 4fa79a19ea8c437bb8dd7344af995a31
I20251025 14:08:39.398278 681 tablet_service.cc:1552] Processing DeleteTablet for tablet 2540174fec5e4502836df2c90046c1ca with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:08:39 UTC) from {username='slave'} at 127.0.0.1:47594
I20251025 14:08:39.398279 682 tablet_service.cc:1552] Processing DeleteTablet for tablet 4fa79a19ea8c437bb8dd7344af995a31 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:08:39 UTC) from {username='slave'} at 127.0.0.1:47594
I20251025 14:08:39.398504 780 tablet_replica.cc:333] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: stopping tablet replica
I20251025 14:08:39.398571 780 raft_consensus.cc:2243] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:39.398624 780 raft_consensus.cc:2272] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:39.398957 780 ts_tablet_manager.cc:1916] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Deleting tablet data with delete state TABLET_DATA_DELETED
W20251025 14:08:39.399282 722 meta_cache.cc:788] Not found: LookupRpcById { tablet: '4fa79a19ea8c437bb8dd7344af995a31', attempt: 1 } failed
I20251025 14:08:39.399333 722 txn_status_manager.cc:244] Participant 4fa79a19ea8c437bb8dd7344af995a31 of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '4fa79a19ea8c437bb8dd7344af995a31', attempt: 1 } failed
I20251025 14:08:39.399394 722 txn_status_manager.cc:244] Participant 2540174fec5e4502836df2c90046c1ca of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '2540174fec5e4502836df2c90046c1ca', attempt: 1 } failed
I20251025 14:08:39.399420 722 txn_status_manager.cc:206] Scheduling write for ABORT_IN_PROGRESS for txn 0
I20251025 14:08:39.400120 722 txn_status_manager.cc:337] Participant 4fa79a19ea8c437bb8dd7344af995a31 was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '4fa79a19ea8c437bb8dd7344af995a31', attempt: 1 } failed
I20251025 14:08:39.400193 722 txn_status_manager.cc:337] Participant 2540174fec5e4502836df2c90046c1ca was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '2540174fec5e4502836df2c90046c1ca', attempt: 1 } failed
I20251025 14:08:39.400489 780 ts_tablet_manager.cc:1929] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.106
I20251025 14:08:39.400542 780 log.cc:1199] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/2540174fec5e4502836df2c90046c1ca
I20251025 14:08:39.400784 780 ts_tablet_manager.cc:1950] T 2540174fec5e4502836df2c90046c1ca P e98cf2c1faad475eac345e454e70411e: Deleting consensus metadata
I20251025 14:08:39.400982 780 tablet_replica.cc:333] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: stopping tablet replica
I20251025 14:08:39.401104 557 catalog_manager.cc:4985] TS e98cf2c1faad475eac345e454e70411e (127.30.194.193:41335): tablet 2540174fec5e4502836df2c90046c1ca (table test-workload [id=9c1da13d7c8d42a79b6abdac473641ce]) successfully deleted
I20251025 14:08:39.401109 780 raft_consensus.cc:2243] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:39.401207 780 raft_consensus.cc:2272] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:39.401486 780 ts_tablet_manager.cc:1916] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Deleting tablet data with delete state TABLET_DATA_DELETED
I20251025 14:08:39.402120 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:39.404435 780 ts_tablet_manager.cc:1929] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.106
I20251025 14:08:39.404518 780 log.cc:1199] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestCommitAfterDroppingRangeParticipant.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/4fa79a19ea8c437bb8dd7344af995a31
I20251025 14:08:39.404803 780 ts_tablet_manager.cc:1950] T 4fa79a19ea8c437bb8dd7344af995a31 P e98cf2c1faad475eac345e454e70411e: Deleting consensus metadata
I20251025 14:08:39.404800 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
W20251025 14:08:39.405138 557 catalog_manager.cc:4977] TS e98cf2c1faad475eac345e454e70411e (127.30.194.193:41335): delete failed for tablet 4fa79a19ea8c437bb8dd7344af995a31 with error code TABLET_NOT_RUNNING: Service unavailable: Tablet Manager is not running: MANAGER_QUIESCING
I20251025 14:08:39.405193 31499 tablet_replica.cc:333] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e: stopping tablet replica
I20251025 14:08:39.405244 31499 raft_consensus.cc:2243] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:39.405282 31499 raft_consensus.cc:2272] T 833aa948e7ef4302bee56d15dbb05380 P e98cf2c1faad475eac345e454e70411e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:39.417490 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:39.418723 31499 master.cc:561] Master@127.30.194.254:38967 shutting down...
W20251025 14:08:39.440936 557 catalog_manager.cc:4712] TS e98cf2c1faad475eac345e454e70411e (127.30.194.193:41335): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet 4fa79a19ea8c437bb8dd7344af995a31: Network error: Client connection negotiation failed: client connection to 127.30.194.193:41335: connect: Connection refused (error 111)
I20251025 14:08:39.448294 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:39.448396 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:39.448418 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P f5367db8ed2b4b39be12ab4272ed4c2a: stopping tablet replica
I20251025 14:08:39.460590 31499 master.cc:583] Master@127.30.194.254:38967 shutdown complete.
[ OK ] TxnCommitITest.TestCommitAfterDroppingRangeParticipant (1266 ms)
[ RUN ] TxnCommitITest.TestRestartingWhileCommitting
I20251025 14:08:39.464156 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:45321
I20251025 14:08:39.464308 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:39.465447 784 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:39.465490 786 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:39.465646 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:39.465567 783 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:39.465796 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:39.465837 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:39.465860 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401319465860 us; error 0 us; skew 500 ppm
I20251025 14:08:39.466396 31499 webserver.cc:492] Webserver started at http://127.30.194.254:45119/ using document root <none> and password file <none>
I20251025 14:08:39.466478 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:39.466526 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:39.466576 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:39.466868 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "8a84de90551c4dac8b32c22b23e1f4c5"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:39.467775 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:39.468312 791 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.468490 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.468546 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "8a84de90551c4dac8b32c22b23e1f4c5"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:39.468593 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:39.495728 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:39.495932 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:39.498714 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:45321
I20251025 14:08:39.500193 853 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:45321 every 8 connection(s)
I20251025 14:08:39.500332 854 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:39.501341 854 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5: Bootstrap starting.
I20251025 14:08:39.501605 854 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:39.502041 854 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5: No bootstrap required, opened a new log
I20251025 14:08:39.502153 854 raft_consensus.cc:359] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER }
I20251025 14:08:39.502202 854 raft_consensus.cc:385] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:39.502224 854 raft_consensus.cc:740] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8a84de90551c4dac8b32c22b23e1f4c5, State: Initialized, Role: FOLLOWER
I20251025 14:08:39.502278 854 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER }
I20251025 14:08:39.502321 854 raft_consensus.cc:399] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:39.502349 854 raft_consensus.cc:493] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:39.502380 854 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:39.502904 854 raft_consensus.cc:515] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER }
I20251025 14:08:39.502964 854 leader_election.cc:304] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8a84de90551c4dac8b32c22b23e1f4c5; no voters:
I20251025 14:08:39.503060 854 leader_election.cc:290] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:39.503099 857 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:39.503193 854 sys_catalog.cc:565] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:08:39.503218 857 raft_consensus.cc:697] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 1 LEADER]: Becoming Leader. State: Replica: 8a84de90551c4dac8b32c22b23e1f4c5, State: Running, Role: LEADER
I20251025 14:08:39.503353 857 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER }
I20251025 14:08:39.503574 859 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8a84de90551c4dac8b32c22b23e1f4c5. Latest consensus state: current_term: 1 leader_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER } }
I20251025 14:08:39.503575 858 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8a84de90551c4dac8b32c22b23e1f4c5" member_type: VOTER } }
I20251025 14:08:39.503685 859 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:39.503768 858 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [sys.catalog]: This master's current role is: LEADER
I20251025 14:08:39.503968 865 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:08:39.504106 865 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:08:39.504595 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:08:39.504663 865 catalog_manager.cc:1357] Generated new cluster ID: a98512b6f4c14447ac3b1088e3594be1
I20251025 14:08:39.504714 865 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:08:39.512327 865 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:08:39.512755 865 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:08:39.518221 865 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5: Generated new TSK 0
I20251025 14:08:39.518313 865 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:08:39.520216 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:39.521430 884 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:39.521413 882 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:39.521433 881 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:39.521661 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:39.521756 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:39.521790 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:39.521801 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401319521802 us; error 0 us; skew 500 ppm
I20251025 14:08:39.522315 31499 webserver.cc:492] Webserver started at http://127.30.194.193:36873/ using document root <none> and password file <none>
I20251025 14:08:39.522390 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:39.522435 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:08:39.522496 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:08:39.522742 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "b13151cc93314ac9994132675db23ba7"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:39.523588 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:39.523998 889 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.524130 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.524171 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "b13151cc93314ac9994132675db23ba7"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:39.524221 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:39.527014 808 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60426:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:39.533735 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:39.533922 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:39.534250 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:08:39.534291 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.534323 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:08:39.534346 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:39.537621 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:37113
I20251025 14:08:39.537907 954 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:37113 every 8 connection(s)
I20251025 14:08:39.538056 955 heartbeater.cc:344] Connected to a master server at 127.30.194.254:45321
I20251025 14:08:39.538116 955 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:39.538213 955 heartbeater.cc:507] Master 127.30.194.254:45321 requested a full tablet report, sending...
I20251025 14:08:39.538429 808 ts_manager.cc:194] Registered new tserver with Master: b13151cc93314ac9994132675db23ba7 (127.30.194.193:37113)
I20251025 14:08:39.538944 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001119069s
I20251025 14:08:39.539135 808 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:60440
I20251025 14:08:40.532296 808 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60456:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:08:40.536468 919 tablet_service.cc:1505] Processing CreateTablet for tablet 47bcb429014749a081956d281ce2c3b6 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=608b5054150b45948189524a7eaeb3c6]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:08:40.536602 919 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 47bcb429014749a081956d281ce2c3b6. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:40.537803 975 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:40.538106 975 tablet_bootstrap.cc:654] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:40.538559 975 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: No bootstrap required, opened a new log
I20251025 14:08:40.538595 975 ts_tablet_manager.cc:1403] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:40.538726 975 raft_consensus.cc:359] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.538770 975 raft_consensus.cc:385] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:40.538784 975 raft_consensus.cc:740] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:40.538821 975 consensus_queue.cc:260] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.538867 975 raft_consensus.cc:399] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:40.538883 975 raft_consensus.cc:493] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:40.538902 975 raft_consensus.cc:3060] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:40.539407 975 raft_consensus.cc:515] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.539484 975 leader_election.cc:304] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:40.539606 975 leader_election.cc:290] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:40.539639 977 raft_consensus.cc:2804] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:40.539613 955 heartbeater.cc:499] Master 127.30.194.254:45321 was elected leader, sending a full tablet report...
I20251025 14:08:40.539795 977 raft_consensus.cc:697] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:40.539831 975 ts_tablet_manager.cc:1434] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:40.539841 977 consensus_queue.cc:237] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.540148 978 tablet_replica.cc:442] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:40.540172 979 tablet_replica.cc:442] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: TxnStatusTablet state changed. Reason: New leader b13151cc93314ac9994132675db23ba7. Latest consensus state: current_term: 1 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:40.540307 978 tablet_replica.cc:445] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:40.540318 808 catalog_manager.cc:5649] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 0 to 1, leader changed from <none> to b13151cc93314ac9994132675db23ba7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:40.540409 979 tablet_replica.cc:445] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:40.540649 982 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:40.540691 982 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:40.572604 31499 test_util.cc:276] Using random seed: 872744634
I20251025 14:08:40.576838 808 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60480:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:08:40.578271 919 tablet_service.cc:1505] Processing CreateTablet for tablet 9e3e5c3fe0464c7da45fd76b15b27e18 (DEFAULT_TABLE table=test-workload [id=c82a626cecf4431889646f042db531a4]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:40.578300 918 tablet_service.cc:1505] Processing CreateTablet for tablet 86a059ba53f645fc9c1884eb8dac8e64 (DEFAULT_TABLE table=test-workload [id=c82a626cecf4431889646f042db531a4]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:08:40.578379 919 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9e3e5c3fe0464c7da45fd76b15b27e18. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:40.578482 918 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 86a059ba53f645fc9c1884eb8dac8e64. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:08:40.579337 975 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:40.579774 975 tablet_bootstrap.cc:654] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:40.580346 975 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: No bootstrap required, opened a new log
I20251025 14:08:40.580400 975 ts_tablet_manager.cc:1403] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:40.580549 975 raft_consensus.cc:359] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.580595 975 raft_consensus.cc:385] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:40.580619 975 raft_consensus.cc:740] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:40.580672 975 consensus_queue.cc:260] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.580711 975 raft_consensus.cc:399] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:40.580734 975 raft_consensus.cc:493] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:40.580765 975 raft_consensus.cc:3060] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:40.581308 975 raft_consensus.cc:515] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.581377 975 leader_election.cc:304] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:40.581424 975 leader_election.cc:290] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:40.581496 978 raft_consensus.cc:2804] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:40.581519 975 ts_tablet_manager.cc:1434] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:40.581576 978 raft_consensus.cc:697] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:40.581624 978 consensus_queue.cc:237] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.581622 975 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:40.582058 975 tablet_bootstrap.cc:654] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Neither blocks nor log segments found. Creating new log.
I20251025 14:08:40.582064 808 catalog_manager.cc:5649] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 0 to 1, leader changed from <none> to b13151cc93314ac9994132675db23ba7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:40.582885 975 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: No bootstrap required, opened a new log
I20251025 14:08:40.582930 975 ts_tablet_manager.cc:1403] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:08:40.583071 975 raft_consensus.cc:359] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.583113 975 raft_consensus.cc:385] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:08:40.583138 975 raft_consensus.cc:740] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:40.583190 975 consensus_queue.cc:260] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.583236 975 raft_consensus.cc:399] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:40.583259 975 raft_consensus.cc:493] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:40.583288 975 raft_consensus.cc:3060] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:08:40.584445 975 raft_consensus.cc:515] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.584561 975 leader_election.cc:304] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:40.584632 978 raft_consensus.cc:2804] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:08:40.584676 978 raft_consensus.cc:697] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:40.584717 978 consensus_queue.cc:237] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:40.584894 975 leader_election.cc:290] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:08:40.585016 975 ts_tablet_manager.cc:1434] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.002s user 0.002s sys 0.000s
I20251025 14:08:40.585134 808 catalog_manager.cc:5649] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 0 to 1, leader changed from <none> to b13151cc93314ac9994132675db23ba7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:40.658546 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:08:40.661331 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:08:40.661578 31499 tablet_replica.cc:333] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:40.661639 31499 raft_consensus.cc:2243] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:40.661690 31499 raft_consensus.cc:2272] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:40.662088 31499 tablet_replica.cc:333] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:40.662138 31499 raft_consensus.cc:2243] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:40.662164 31499 raft_consensus.cc:2272] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:40.712568 31499 txn_status_manager.cc:765] Waiting for 1 task(s) to stop
W20251025 14:08:40.873884 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 5) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:40.873965 838 rpcz_store.cc:269] 1025 14:08:40.792906 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:40.792943 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:40.873865 (+ 80922us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:40.948124 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 6) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:40.948215 837 rpcz_store.cc:269] 1025 14:08:40.869253 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:40.869306 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:40.948110 (+ 78804us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.032181 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 7) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.032265 838 rpcz_store.cc:269] 1025 14:08:40.946507 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:40.946545 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:41.032168 (+ 85623us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.106838 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 8) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.106918 837 rpcz_store.cc:269] 1025 14:08:41.022913 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.022961 (+ 48us) service_pool.cc:225] Handling call
1025 14:08:41.106827 (+ 83866us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.178174 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 9) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.178257 838 rpcz_store.cc:269] 1025 14:08:41.099968 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.100006 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:41.178150 (+ 78144us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.265247 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 10) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.265354 837 rpcz_store.cc:269] 1025 14:08:41.176557 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.176619 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:41.265232 (+ 88613us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.336374 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 11) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.336462 838 rpcz_store.cc:269] 1025 14:08:41.252895 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.252959 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:41.336362 (+ 83403us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.406179 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 12) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.406327 837 rpcz_store.cc:269] 1025 14:08:41.330079 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.330120 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:41.406153 (+ 76033us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.489094 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 13) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.489207 837 rpcz_store.cc:269] 1025 14:08:41.407923 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.407990 (+ 67us) service_pool.cc:225] Handling call
1025 14:08:41.489079 (+ 81089us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.567534 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 14) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.567610 838 rpcz_store.cc:269] 1025 14:08:41.484363 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.484427 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:41.567523 (+ 83096us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.639257 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 15) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.639348 837 rpcz_store.cc:269] 1025 14:08:41.560779 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.560815 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:41.639245 (+ 78430us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.716972 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 16) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.717095 838 rpcz_store.cc:269] 1025 14:08:41.637418 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.637491 (+ 73us) service_pool.cc:225] Handling call
1025 14:08:41.716959 (+ 79468us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.790992 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 17) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.791090 837 rpcz_store.cc:269] 1025 14:08:41.714040 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.714101 (+ 61us) service_pool.cc:225] Handling call
1025 14:08:41.790979 (+ 76878us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.869110 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 18) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.869175 838 rpcz_store.cc:269] 1025 14:08:41.790842 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.790913 (+ 71us) service_pool.cc:225] Handling call
1025 14:08:41.869101 (+ 78188us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:41.949602 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 19) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:41.949682 837 rpcz_store.cc:269] 1025 14:08:41.867615 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.867648 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:41.949590 (+ 81942us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.023991 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 20) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.024066 838 rpcz_store.cc:269] 1025 14:08:41.944761 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:41.944804 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:42.023980 (+ 79176us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.104625 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 21) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.104728 837 rpcz_store.cc:269] 1025 14:08:42.021298 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.021336 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:42.104611 (+ 83275us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.182389 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 22) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.182475 838 rpcz_store.cc:269] 1025 14:08:42.097695 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.097736 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:42.182375 (+ 84639us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.250150 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 23) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.250234 837 rpcz_store.cc:269] 1025 14:08:42.174424 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.174465 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:42.250138 (+ 75673us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.330679 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 24) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.330761 837 rpcz_store.cc:269] 1025 14:08:42.251656 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.251692 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:42.330667 (+ 78975us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.402907 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 25) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.402989 838 rpcz_store.cc:269] 1025 14:08:42.328103 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.328148 (+ 45us) service_pool.cc:225] Handling call
1025 14:08:42.402895 (+ 74747us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.486497 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 26) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.486588 838 rpcz_store.cc:269] 1025 14:08:42.404398 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.404453 (+ 55us) service_pool.cc:225] Handling call
1025 14:08:42.486472 (+ 82019us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.561090 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 27) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.561172 837 rpcz_store.cc:269] 1025 14:08:42.480835 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.480903 (+ 68us) service_pool.cc:225] Handling call
1025 14:08:42.561078 (+ 80175us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.634063 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 28) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.634151 838 rpcz_store.cc:269] 1025 14:08:42.557158 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.557217 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:42.634052 (+ 76835us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.709299 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 29) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.709385 837 rpcz_store.cc:269] 1025 14:08:42.633913 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.633962 (+ 49us) service_pool.cc:225] Handling call
1025 14:08:42.709286 (+ 75324us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.789780 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 30) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.789870 837 rpcz_store.cc:269] 1025 14:08:42.710798 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.710853 (+ 55us) service_pool.cc:225] Handling call
1025 14:08:42.789767 (+ 78914us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.866067 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 31) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.866143 838 rpcz_store.cc:269] 1025 14:08:42.787241 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.787306 (+ 65us) service_pool.cc:225] Handling call
1025 14:08:42.866056 (+ 78750us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:42.946766 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 32) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:42.946892 837 rpcz_store.cc:269] 1025 14:08:42.864544 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.864604 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:42.946743 (+ 82139us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.023777 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 33) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.023862 838 rpcz_store.cc:269] 1025 14:08:42.940857 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:42.940894 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:43.023759 (+ 82865us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.099282 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 34) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.099373 837 rpcz_store.cc:269] 1025 14:08:43.017751 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.017822 (+ 71us) service_pool.cc:225] Handling call
1025 14:08:43.099269 (+ 81447us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.184504 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 35) took 89 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.184587 838 rpcz_store.cc:269] 1025 14:08:43.094610 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.094672 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:43.184490 (+ 89818us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.258646 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 36) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.258742 837 rpcz_store.cc:269] 1025 14:08:43.171004 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.171061 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:43.258630 (+ 87569us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.329447 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 37) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.329530 838 rpcz_store.cc:269] 1025 14:08:43.248195 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.248243 (+ 48us) service_pool.cc:225] Handling call
1025 14:08:43.329427 (+ 81184us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.402977 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 38) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.403054 837 rpcz_store.cc:269] 1025 14:08:43.324493 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.324556 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:43.402965 (+ 78409us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.481225 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 39) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.481309 838 rpcz_store.cc:269] 1025 14:08:43.401148 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.401221 (+ 73us) service_pool.cc:225] Handling call
1025 14:08:43.481214 (+ 79993us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.558630 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 40) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.558708 837 rpcz_store.cc:269] 1025 14:08:43.477692 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.477731 (+ 39us) service_pool.cc:225] Handling call
1025 14:08:43.558617 (+ 80886us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.634992 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 41) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.635073 838 rpcz_store.cc:269] 1025 14:08:43.554075 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.554131 (+ 56us) service_pool.cc:225] Handling call
1025 14:08:43.634979 (+ 80848us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.706594 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 42) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.706673 837 rpcz_store.cc:269] 1025 14:08:43.630413 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.630476 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:43.706579 (+ 76103us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.783025 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 43) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.783103 838 rpcz_store.cc:269] 1025 14:08:43.706688 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.706745 (+ 57us) service_pool.cc:225] Handling call
1025 14:08:43.783012 (+ 76267us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.861231 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 44) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.861304 838 rpcz_store.cc:269] 1025 14:08:43.783284 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.783321 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:43.861220 (+ 77899us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:43.941913 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 45) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:43.941992 837 rpcz_store.cc:269] 1025 14:08:43.859689 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.859736 (+ 47us) service_pool.cc:225] Handling call
1025 14:08:43.941900 (+ 82164us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.016206 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 46) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.016278 838 rpcz_store.cc:269] 1025 14:08:43.935987 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:43.936047 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:44.016195 (+ 80148us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.098347 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 47) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.098448 837 rpcz_store.cc:269] 1025 14:08:44.012677 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.012745 (+ 68us) service_pool.cc:225] Handling call
1025 14:08:44.098328 (+ 85583us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.169575 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 48) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.169661 838 rpcz_store.cc:269] 1025 14:08:44.089058 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.089111 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:44.169562 (+ 80451us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.247254 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 49) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.247332 837 rpcz_store.cc:269] 1025 14:08:44.165967 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.166026 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:44.247241 (+ 81215us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.323582 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 50) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.323675 838 rpcz_store.cc:269] 1025 14:08:44.242345 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.242412 (+ 67us) service_pool.cc:225] Handling call
1025 14:08:44.323566 (+ 81154us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.404186 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 51) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.404280 837 rpcz_store.cc:269] 1025 14:08:44.319017 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.319057 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:44.404171 (+ 85114us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.478067 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 52) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.478140 838 rpcz_store.cc:269] 1025 14:08:44.395330 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.395394 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:44.478055 (+ 82661us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.547623 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 53) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.547705 837 rpcz_store.cc:269] 1025 14:08:44.471919 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.471989 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:44.547610 (+ 75621us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.625769 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 54) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.625856 837 rpcz_store.cc:269] 1025 14:08:44.549135 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.549193 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:44.625754 (+ 76561us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.701344 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 55) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.701421 838 rpcz_store.cc:269] 1025 14:08:44.625556 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.625625 (+ 69us) service_pool.cc:225] Handling call
1025 14:08:44.701328 (+ 75703us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.778546 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 56) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.778632 838 rpcz_store.cc:269] 1025 14:08:44.702810 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.702840 (+ 30us) service_pool.cc:225] Handling call
1025 14:08:44.778532 (+ 75692us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.867692 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 57) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.867777 838 rpcz_store.cc:269] 1025 14:08:44.780080 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.780106 (+ 26us) service_pool.cc:225] Handling call
1025 14:08:44.867679 (+ 87573us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:44.939311 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 58) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:44.939391 837 rpcz_store.cc:269] 1025 14:08:44.856425 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.856487 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:44.939298 (+ 82811us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.010324 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 59) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.010408 838 rpcz_store.cc:269] 1025 14:08:44.933502 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:44.933556 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:45.010309 (+ 76753us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.091958 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 60) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.092036 837 rpcz_store.cc:269] 1025 14:08:45.010107 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.010189 (+ 82us) service_pool.cc:225] Handling call
1025 14:08:45.091945 (+ 81756us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.173840 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 61) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.173918 838 rpcz_store.cc:269] 1025 14:08:45.087054 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.087124 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:45.173829 (+ 86705us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.238262 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 62) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.238350 837 rpcz_store.cc:269] 1025 14:08:45.163663 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.163725 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:45.238249 (+ 74524us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.325608 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 63) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.325687 837 rpcz_store.cc:269] 1025 14:08:45.239885 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.239943 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:45.325594 (+ 85651us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.390697 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 64) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.390782 838 rpcz_store.cc:269] 1025 14:08:45.316403 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.316485 (+ 82us) service_pool.cc:225] Handling call
1025 14:08:45.390682 (+ 74197us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.471285 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 65) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.471349 838 rpcz_store.cc:269] 1025 14:08:45.392190 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.392232 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:45.471272 (+ 79040us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.555369 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 66) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.555446 837 rpcz_store.cc:269] 1025 14:08:45.468570 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.468638 (+ 68us) service_pool.cc:225] Handling call
1025 14:08:45.555357 (+ 86719us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.631266 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 67) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.631357 838 rpcz_store.cc:269] 1025 14:08:45.544888 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.544930 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:45.631251 (+ 86321us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.703651 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 68) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.703729 837 rpcz_store.cc:269] 1025 14:08:45.622097 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.622148 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:45.703638 (+ 81490us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:08:45.722766 31499 tablet_replica.cc:333] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:45.722854 31499 raft_consensus.cc:2243] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:08:45.723172 31499 raft_consensus.cc:2272] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Raft consensus is shut down!
W20251025 14:08:45.732038 965 meta_cache.cc:302] tablet 47bcb429014749a081956d281ce2c3b6: replica b13151cc93314ac9994132675db23ba7 (127.30.194.193:37113) has failed: Network error: Client connection negotiation failed: client connection to 127.30.194.193:37113: connect: Connection refused (error 111)
I20251025 14:08:45.735481 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:08:45.736653 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:45.737876 1021 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:45.737912 1018 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:45.738063 31499 server_base.cc:1047] running on GCE node
W20251025 14:08:45.737963 1019 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:45.738250 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:45.738301 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:45.738323 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401325738323 us; error 0 us; skew 500 ppm
I20251025 14:08:45.738943 31499 webserver.cc:492] Webserver started at http://127.30.194.193:36873/ using document root <none> and password file <none>
I20251025 14:08:45.739037 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:45.739075 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:08:45.739634 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.000s sys 0.001s
I20251025 14:08:45.740181 1026 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:45.740321 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.001s
I20251025 14:08:45.740365 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "b13151cc93314ac9994132675db23ba7"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:45.740418 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:45.746325 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:45.746500 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:45.746927 1034 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20251025 14:08:45.748507 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20251025 14:08:45.748556 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.002s user 0.000s sys 0.000s
I20251025 14:08:45.748592 31499 ts_tablet_manager.cc:600] Registering tablets (0/3 complete)
I20251025 14:08:45.749382 1034 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:45.749732 31499 ts_tablet_manager.cc:616] Registered 3 tablets
I20251025 14:08:45.749773 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s user 0.000s sys 0.001s
I20251025 14:08:45.753146 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:37113
I20251025 14:08:45.753216 1094 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:37113 every 8 connection(s)
I20251025 14:08:45.754537 1095 heartbeater.cc:344] Connected to a master server at 127.30.194.254:45321
I20251025 14:08:45.754662 1095 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:45.755952 1095 heartbeater.cc:507] Master 127.30.194.254:45321 requested a full tablet report, sending...
I20251025 14:08:45.756237 802 ts_manager.cc:194] Re-registered known tserver with Master: b13151cc93314ac9994132675db23ba7 (127.30.194.193:37113)
I20251025 14:08:45.756791 802 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53624
I20251025 14:08:45.763826 1034 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=95 overwritten=0 applied=95 ignored=0} inserts{seen=2271 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:45.764065 1034 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:45.764186 1034 ts_tablet_manager.cc:1403] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.015s user 0.011s sys 0.000s
I20251025 14:08:45.764312 1034 raft_consensus.cc:359] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.764396 1034 raft_consensus.cc:740] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:45.764452 1034 consensus_queue.cc:260] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 95, Last appended: 1.95, Last appended by leader: 95, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.764494 1034 raft_consensus.cc:399] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:45.764516 1034 raft_consensus.cc:493] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:45.764549 1034 raft_consensus.cc:3060] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:45.765103 1034 raft_consensus.cc:515] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.765161 1034 leader_election.cc:304] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:45.765271 1034 leader_election.cc:290] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:45.765321 1103 raft_consensus.cc:2804] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:45.765427 1095 heartbeater.cc:499] Master 127.30.194.254:45321 was elected leader, sending a full tablet report...
I20251025 14:08:45.765394 1034 ts_tablet_manager.cc:1434] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.000s sys 0.004s
I20251025 14:08:45.765465 1103 raft_consensus.cc:697] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:45.765527 1034 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:45.765537 1103 consensus_queue.cc:237] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 95, Committed index: 95, Last appended: 1.95, Last appended by leader: 95, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.765974 802 catalog_manager.cc:5649] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:45.766762 1034 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=3 ignored=0} mutations{seen=1 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:45.766995 1034 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:45.767103 1034 ts_tablet_manager.cc:1403] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.002s user 0.000s sys 0.000s
I20251025 14:08:45.767205 1034 raft_consensus.cc:359] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.767262 1034 raft_consensus.cc:740] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:45.767321 1034 consensus_queue.cc:260] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 1.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.767367 1034 raft_consensus.cc:399] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:45.767396 1034 raft_consensus.cc:493] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:45.767424 1034 raft_consensus.cc:3060] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:45.767879 1034 raft_consensus.cc:515] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.767932 1034 leader_election.cc:304] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:45.767978 1034 leader_election.cc:290] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:45.768011 1103 raft_consensus.cc:2804] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:45.768051 1034 ts_tablet_manager.cc:1434] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:45.768101 1034 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:45.768137 1103 raft_consensus.cc:697] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:45.768182 1103 consensus_queue.cc:237] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 1.5, Last appended by leader: 5, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.768348 1104 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:45.768399 1104 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:45.768529 1114 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:45.768566 1114 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:45.768579 1112 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: New leader b13151cc93314ac9994132675db23ba7. Latest consensus state: current_term: 2 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:45.768622 1112 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:45.768720 1114 txn_status_manager.cc:716] Starting 1 commit tasks
I20251025 14:08:45.768718 802 catalog_manager.cc:5649] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:45.774353 31499 tablet_server.cc:178] TabletServer@127.30.194.193:37113 shutting down...
I20251025 14:08:45.776801 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
W20251025 14:08:45.779019 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 69) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.779084 838 rpcz_store.cc:269] 1025 14:08:45.699062 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.699113 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:45.779012 (+ 79899us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:08:45.780654 1034 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=99 overwritten=0 applied=99 ignored=0} inserts{seen=2239 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:45.780864 1034 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:45.780957 1034 ts_tablet_manager.cc:1403] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.013s user 0.008s sys 0.000s
I20251025 14:08:45.781113 1034 raft_consensus.cc:359] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.781162 1034 raft_consensus.cc:740] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:45.781203 1034 consensus_queue.cc:260] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 99, Last appended: 1.99, Last appended by leader: 99, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.781246 1034 raft_consensus.cc:399] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:45.781267 1034 raft_consensus.cc:493] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:45.781307 1034 raft_consensus.cc:3060] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:08:45.781798 1034 raft_consensus.cc:515] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.781852 1034 leader_election.cc:304] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:45.781899 1034 leader_election.cc:290] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:08:45.781934 1112 raft_consensus.cc:2804] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:08:45.781962 1034 ts_tablet_manager.cc:1434] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.004s sys 0.000s
I20251025 14:08:45.782011 1112 raft_consensus.cc:697] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:45.782135 1112 consensus_queue.cc:237] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 99, Committed index: 99, Last appended: 1.99, Last appended by leader: 99, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:45.782466 31499 tablet_replica.cc:333] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:45.782529 31499 raft_consensus.cc:2243] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:45.782577 31499 raft_consensus.cc:2272] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:08:45.782778 31499 tablet_replica.cc:333] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:45.782824 31499 raft_consensus.cc:2243] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:45.782848 31499 raft_consensus.cc:2272] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Raft consensus is shut down!
W20251025 14:08:45.783056 31499 mvcc.cc:118] aborting op with timestamp 7214699830361747456 in state 0; MVCC is closed
I20251025 14:08:45.783105 31499 tablet_replica.cc:333] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: stopping tablet replica
I20251025 14:08:45.783144 31499 raft_consensus.cc:2243] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:08:45.783177 31499 raft_consensus.cc:2272] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Raft consensus is shut down!
W20251025 14:08:45.860813 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 71) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.860893 837 rpcz_store.cc:269] 1025 14:08:45.775544 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.775642 (+ 98us) service_pool.cc:225] Handling call
1025 14:08:45.860801 (+ 85159us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:45.940982 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 72) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:45.941082 838 rpcz_store.cc:269] 1025 14:08:45.852660 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.852703 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:45.940967 (+ 88264us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.018031 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 73) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.018113 837 rpcz_store.cc:269] 1025 14:08:45.929643 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:45.929685 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:46.018017 (+ 88332us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.086337 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 74) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.086452 838 rpcz_store.cc:269] 1025 14:08:46.006734 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.006775 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:46.086325 (+ 79550us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.163623 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 75) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.163712 837 rpcz_store.cc:269] 1025 14:08:46.083819 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.083865 (+ 46us) service_pool.cc:225] Handling call
1025 14:08:46.163609 (+ 79744us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.246788 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 76) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.246868 838 rpcz_store.cc:269] 1025 14:08:46.161056 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.161100 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:46.246775 (+ 85675us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.314241 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 77) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.314322 837 rpcz_store.cc:269] 1025 14:08:46.237453 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.237493 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:46.314228 (+ 76735us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.389432 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 78) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.389524 837 rpcz_store.cc:269] 1025 14:08:46.314549 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.314584 (+ 35us) service_pool.cc:225] Handling call
1025 14:08:46.389415 (+ 74831us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.465622 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 79) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.465710 837 rpcz_store.cc:269] 1025 14:08:46.390972 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.391005 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:46.465608 (+ 74603us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.541816 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 80) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.541901 837 rpcz_store.cc:269] 1025 14:08:46.467105 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.467175 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:46.541804 (+ 74629us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.622336 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 81) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.622424 837 rpcz_store.cc:269] 1025 14:08:46.543374 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.543411 (+ 37us) service_pool.cc:225] Handling call
1025 14:08:46.622324 (+ 78913us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.705288 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 82) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.705364 838 rpcz_store.cc:269] 1025 14:08:46.619822 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.619867 (+ 45us) service_pool.cc:225] Handling call
1025 14:08:46.705274 (+ 85407us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.780962 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 83) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.781082 837 rpcz_store.cc:269] 1025 14:08:46.696028 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.696062 (+ 34us) service_pool.cc:225] Handling call
1025 14:08:46.780950 (+ 84888us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.853338 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 84) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.853425 838 rpcz_store.cc:269] 1025 14:08:46.772910 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.772952 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:46.853326 (+ 80374us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:46.934001 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 85) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:46.934089 837 rpcz_store.cc:269] 1025 14:08:46.849716 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.849758 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:46.933986 (+ 84228us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.009626 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 86) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.009708 838 rpcz_store.cc:269] 1025 14:08:46.926058 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:46.926098 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:47.009613 (+ 83515us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.086524 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 87) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.086611 837 rpcz_store.cc:269] 1025 14:08:47.002719 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.002755 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:47.086512 (+ 83757us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.158653 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 88) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.158730 838 rpcz_store.cc:269] 1025 14:08:47.079386 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.079456 (+ 70us) service_pool.cc:225] Handling call
1025 14:08:47.158641 (+ 79185us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.233757 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 89) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.233840 837 rpcz_store.cc:269] 1025 14:08:47.156147 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.156205 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:47.233742 (+ 77537us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.316458 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 90) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.316545 838 rpcz_store.cc:269] 1025 14:08:47.233395 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.233460 (+ 65us) service_pool.cc:225] Handling call
1025 14:08:47.316443 (+ 82983us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.389890 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 91) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.389969 837 rpcz_store.cc:269] 1025 14:08:47.310536 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.310594 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:47.389877 (+ 79283us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.465220 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 92) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.465294 838 rpcz_store.cc:269] 1025 14:08:47.387241 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.387301 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:47.465209 (+ 77908us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.550086 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 93) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.550168 837 rpcz_store.cc:269] 1025 14:08:47.463527 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.463590 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:47.550073 (+ 86483us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.621680 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 94) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.621769 838 rpcz_store.cc:269] 1025 14:08:47.539928 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.539991 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:47.621667 (+ 81676us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.692463 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 95) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.692552 837 rpcz_store.cc:269] 1025 14:08:47.616832 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.616896 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:47.692450 (+ 75554us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.777150 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 96) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.777264 837 rpcz_store.cc:269] 1025 14:08:47.693983 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.694039 (+ 56us) service_pool.cc:225] Handling call
1025 14:08:47.777137 (+ 83098us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.845855 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 97) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.845930 838 rpcz_store.cc:269] 1025 14:08:47.770267 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.770325 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:47.845841 (+ 75516us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:47.926467 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 98) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:47.926544 838 rpcz_store.cc:269] 1025 14:08:47.847358 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.847389 (+ 31us) service_pool.cc:225] Handling call
1025 14:08:47.926456 (+ 79067us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.002888 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 99) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.002971 837 rpcz_store.cc:269] 1025 14:08:47.923734 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:47.923793 (+ 59us) service_pool.cc:225] Handling call
1025 14:08:48.002875 (+ 79082us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.079248 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 100) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.079327 838 rpcz_store.cc:269] 1025 14:08:48.000295 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.000357 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:48.079235 (+ 78878us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.153189 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 101) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.153268 837 rpcz_store.cc:269] 1025 14:08:48.076639 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.076690 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:48.153163 (+ 76473us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.240132 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 102) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.240219 837 rpcz_store.cc:269] 1025 14:08:48.154733 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.154765 (+ 32us) service_pool.cc:225] Handling call
1025 14:08:48.240119 (+ 85354us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.312507 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 103) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.312597 838 rpcz_store.cc:269] 1025 14:08:48.231202 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.231260 (+ 58us) service_pool.cc:225] Handling call
1025 14:08:48.312495 (+ 81235us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.396610 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 104) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.396696 837 rpcz_store.cc:269] 1025 14:08:48.307955 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.308017 (+ 62us) service_pool.cc:225] Handling call
1025 14:08:48.396597 (+ 88580us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.469401 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 105) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.469485 838 rpcz_store.cc:269] 1025 14:08:48.384453 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.384519 (+ 66us) service_pool.cc:225] Handling call
1025 14:08:48.469388 (+ 84869us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.547416 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 106) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.547505 837 rpcz_store.cc:269] 1025 14:08:48.461322 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.461385 (+ 63us) service_pool.cc:225] Handling call
1025 14:08:48.547403 (+ 86018us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.615074 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 107) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.615199 838 rpcz_store.cc:269] 1025 14:08:48.538099 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.538149 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:48.615057 (+ 76908us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.692071 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 108) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.692169 838 rpcz_store.cc:269] 1025 14:08:48.615362 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.615400 (+ 38us) service_pool.cc:225] Handling call
1025 14:08:48.692054 (+ 76654us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.775630 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 109) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.775710 837 rpcz_store.cc:269] 1025 14:08:48.691906 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.691960 (+ 54us) service_pool.cc:225] Handling call
1025 14:08:48.775617 (+ 83657us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.844152 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 110) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.844234 838 rpcz_store.cc:269] 1025 14:08:48.768806 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.768848 (+ 42us) service_pool.cc:225] Handling call
1025 14:08:48.844136 (+ 75288us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:48.925936 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 111) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:48.926026 838 rpcz_store.cc:269] 1025 14:08:48.845673 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.845723 (+ 50us) service_pool.cc:225] Handling call
1025 14:08:48.925921 (+ 80198us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.003994 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 112) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.004067 837 rpcz_store.cc:269] 1025 14:08:48.922087 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.922143 (+ 56us) service_pool.cc:225] Handling call
1025 14:08:49.003982 (+ 81839us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.074935 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 113) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.075003 838 rpcz_store.cc:269] 1025 14:08:48.998709 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:48.998733 (+ 24us) service_pool.cc:225] Handling call
1025 14:08:49.074921 (+ 76188us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.154384 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 114) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.154461 838 rpcz_store.cc:269] 1025 14:08:49.076441 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.076481 (+ 40us) service_pool.cc:225] Handling call
1025 14:08:49.154372 (+ 77891us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.230866 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 115) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.230942 837 rpcz_store.cc:269] 1025 14:08:49.152795 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.152862 (+ 67us) service_pool.cc:225] Handling call
1025 14:08:49.230853 (+ 77991us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.310504 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 116) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.310590 838 rpcz_store.cc:269] 1025 14:08:49.229240 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.229300 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:49.310491 (+ 81191us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.392457 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 117) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.392547 837 rpcz_store.cc:269] 1025 14:08:49.305689 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.305742 (+ 53us) service_pool.cc:225] Handling call
1025 14:08:49.392443 (+ 86701us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.463069 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 118) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.463152 838 rpcz_store.cc:269] 1025 14:08:49.382311 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.382347 (+ 36us) service_pool.cc:225] Handling call
1025 14:08:49.463056 (+ 80709us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.544798 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 119) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.544880 837 rpcz_store.cc:269] 1025 14:08:49.459209 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.459285 (+ 76us) service_pool.cc:225] Handling call
1025 14:08:49.544784 (+ 85499us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.620474 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 120) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.620551 838 rpcz_store.cc:269] 1025 14:08:49.535877 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.535918 (+ 41us) service_pool.cc:225] Handling call
1025 14:08:49.620461 (+ 84543us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.695281 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 121) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.695356 837 rpcz_store.cc:269] 1025 14:08:49.612568 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.612635 (+ 67us) service_pool.cc:225] Handling call
1025 14:08:49.695269 (+ 82634us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.767443 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 122) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.767527 838 rpcz_store.cc:269] 1025 14:08:49.689327 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.689413 (+ 86us) service_pool.cc:225] Handling call
1025 14:08:49.767429 (+ 78016us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.849130 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 123) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.849215 837 rpcz_store.cc:269] 1025 14:08:49.765884 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.765949 (+ 65us) service_pool.cc:225] Handling call
1025 14:08:49.849118 (+ 83169us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.923394 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 124) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.923471 838 rpcz_store.cc:269] 1025 14:08:49.842211 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.842254 (+ 43us) service_pool.cc:225] Handling call
1025 14:08:49.923380 (+ 81126us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:49.999657 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 125) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:08:49.999734 837 rpcz_store.cc:269] 1025 14:08:49.918861 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.918921 (+ 60us) service_pool.cc:225] Handling call
1025 14:08:49.999644 (+ 80723us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.071749 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 126) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.071825 838 rpcz_store.cc:269] 1025 14:08:49.995095 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:49.995146 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:50.071736 (+ 76590us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.152976 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 127) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.153092 837 rpcz_store.cc:269] 1025 14:08:50.071339 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.071458 (+ 119us) service_pool.cc:225] Handling call
1025 14:08:50.152962 (+ 81504us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.234109 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 128) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.234190 838 rpcz_store.cc:269] 1025 14:08:50.148404 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.148443 (+ 39us) service_pool.cc:225] Handling call
1025 14:08:50.234096 (+ 85653us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.308810 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 129) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.308888 837 rpcz_store.cc:269] 1025 14:08:50.224668 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.224729 (+ 61us) service_pool.cc:225] Handling call
1025 14:08:50.308797 (+ 84068us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.389922 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 130) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.390004 838 rpcz_store.cc:269] 1025 14:08:50.301851 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.301884 (+ 33us) service_pool.cc:225] Handling call
1025 14:08:50.389909 (+ 88025us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.454612 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 131) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.454689 837 rpcz_store.cc:269] 1025 14:08:50.378507 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.378551 (+ 44us) service_pool.cc:225] Handling call
1025 14:08:50.454596 (+ 76045us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.530778 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 132) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.530869 837 rpcz_store.cc:269] 1025 14:08:50.456058 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.456090 (+ 32us) service_pool.cc:225] Handling call
1025 14:08:50.530764 (+ 74674us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.606935 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 133) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.607020 837 rpcz_store.cc:269] 1025 14:08:50.532347 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.532402 (+ 55us) service_pool.cc:225] Handling call
1025 14:08:50.606919 (+ 74517us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:08:50.683823 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 134) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.683902 837 rpcz_store.cc:269] 1025 14:08:50.608463 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.608498 (+ 35us) service_pool.cc:225] Handling call
1025 14:08:50.683808 (+ 75310us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:08:50.743456 31499 txn_status_manager.cc:765] Waiting for 1 task(s) to stop
W20251025 14:08:50.774094 837 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 135) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.774170 837 rpcz_store.cc:269] 1025 14:08:50.685296 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.685360 (+ 64us) service_pool.cc:225] Handling call
1025 14:08:50.774081 (+ 88721us) inbound_call.cc:173] Queueing success response
Metrics: {}
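
Each of the KeepTransactionAlive warnings above pairs the total call duration with a per-step trace, and the "(+ Nus)" deltas show where the time went: for call id 135, 64 us on the service queue plus 88,721 us of handling adds up to the reported 88 ms, past the 74 ms client timeout, so the client has likely given up (and retried) by the time the success response is queued. A minimal C++ sketch, not Kudu code, that sums those deltas from a captured trace and checks the total against the timeout:

    // trace_sum.cc: sum the "(+ Nus)" deltas of an rpcz trace and compare
    // the total against the client timeout. Illustrative only; the trace
    // lines are copied from the log above, the parsing is not Kudu code.
    #include <cstdint>
    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    int main() {
      const std::vector<std::string> trace = {
          "1025 14:08:50.685296 (+ 0us) service_pool.cc:168] Inserting onto call queue",
          "1025 14:08:50.685360 (+ 64us) service_pool.cc:225] Handling call",
          "1025 14:08:50.774081 (+ 88721us) inbound_call.cc:173] Queueing success response",
      };
      const int64_t client_timeout_us = 74 * 1000;  // "client timeout 74 ms"

      std::regex delta_re(R"(\(\+\s*(\d+)us\))");
      int64_t total_us = 0;
      for (const auto& line : trace) {
        std::smatch m;
        if (std::regex_search(line, m, delta_re)) {
          total_us += std::stoll(m[1].str());
        }
      }
      std::cout << "server-side total: " << total_us / 1000 << " ms"
                << (total_us > client_timeout_us ? " (exceeds client timeout)" : "")
                << std::endl;  // prints "server-side total: 88 ms (exceeds client timeout)"
      return 0;
    }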
W20251025 14:08:50.801762 965 meta_cache.cc:302] tablet 47bcb429014749a081956d281ce2c3b6: replica b13151cc93314ac9994132675db23ba7 (127.30.194.193:37113) has failed: Network error: Client connection negotiation failed: client connection to 127.30.194.193:37113: connect: Connection refused (error 111)
I20251025 14:08:50.805222 31499 tablet_server.cc:195] TabletServer@127.30.194.193:37113 shutdown complete.
I20251025 14:08:50.806455 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:08:50.807916 1121 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:50.807987 1123 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:08:50.807911 1120 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:08:50.808107 31499 server_base.cc:1047] running on GCE node
I20251025 14:08:50.808266 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:08:50.808306 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:08:50.808331 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401330808330 us; error 0 us; skew 500 ppm
I20251025 14:08:50.808900 31499 webserver.cc:492] Webserver started at http://127.30.194.193:36873/ using document root <none> and password file <none>
I20251025 14:08:50.808979 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:08:50.809051 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:08:50.809577 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:50.810190 1128 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:08:50.810295 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.001s
I20251025 14:08:50.810346 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "b13151cc93314ac9994132675db23ba7"
format_stamp: "Formatted at 2025-10-25 14:08:39 on dist-test-slave-v4l2"
I20251025 14:08:50.810397 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommitting.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:08:50.819643 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:08:50.819805 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:08:50.820231 1136 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20251025 14:08:50.821890 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20251025 14:08:50.821943 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.002s user 0.000s sys 0.000s
I20251025 14:08:50.821969 31499 ts_tablet_manager.cc:600] Registering tablets (0/3 complete)
I20251025 14:08:50.822507 1136 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:50.822927 31499 ts_tablet_manager.cc:616] Registered 3 tablets
I20251025 14:08:50.822964 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:50.826283 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:37113
I20251025 14:08:50.827359 1196 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:37113 every 8 connection(s)
I20251025 14:08:50.827631 1197 heartbeater.cc:344] Connected to a master server at 127.30.194.254:45321
I20251025 14:08:50.827756 1197 heartbeater.cc:461] Registering TS with master...
I20251025 14:08:50.827919 1197 heartbeater.cc:507] Master 127.30.194.254:45321 requested a full tablet report, sending...
I20251025 14:08:50.828235 802 ts_manager.cc:194] Re-registered known tserver with Master: b13151cc93314ac9994132675db23ba7 (127.30.194.193:37113)
I20251025 14:08:50.828732 802 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53638
I20251025 14:08:50.836544 1136 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=97 overwritten=0 applied=97 ignored=0} inserts{seen=2271 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:50.836798 1136 tablet_bootstrap.cc:492] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:50.836918 1136 ts_tablet_manager.cc:1403] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.014s user 0.005s sys 0.004s
I20251025 14:08:50.837074 1136 raft_consensus.cc:359] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.837144 1136 raft_consensus.cc:740] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:50.837193 1136 consensus_queue.cc:260] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 97, Last appended: 2.97, Last appended by leader: 97, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.837232 1136 raft_consensus.cc:399] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:50.837260 1136 raft_consensus.cc:493] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:50.837286 1136 raft_consensus.cc:3060] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Advancing to term 3
I20251025 14:08:50.837800 1136 raft_consensus.cc:515] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.837857 1136 leader_election.cc:304] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:50.837971 1136 leader_election.cc:290] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Requested vote from peers
I20251025 14:08:50.838032 1205 raft_consensus.cc:2804] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Leader election won for term 3
I20251025 14:08:50.838106 1136 ts_tablet_manager.cc:1434] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.004s sys 0.001s
I20251025 14:08:50.838176 1205 raft_consensus.cc:697] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:50.838222 1197 heartbeater.cc:499] Master 127.30.194.254:45321 was elected leader, sending a full tablet report...
I20251025 14:08:50.838270 1205 consensus_queue.cc:237] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 97, Committed index: 97, Last appended: 2.97, Last appended by leader: 97, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.838413 1136 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:50.838723 802 catalog_manager.cc:5649] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:50.839864 1136 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=3 ignored=0} mutations{seen=1 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:50.840101 1136 tablet_bootstrap.cc:492] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:50.840189 1136 ts_tablet_manager.cc:1403] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.002s user 0.000s sys 0.000s
I20251025 14:08:50.840299 1136 raft_consensus.cc:359] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.840350 1136 raft_consensus.cc:740] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:50.840389 1136 consensus_queue.cc:260] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 2.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.840426 1136 raft_consensus.cc:399] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:50.840472 1136 raft_consensus.cc:493] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:50.840502 1136 raft_consensus.cc:3060] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Advancing to term 3
I20251025 14:08:50.841061 1136 raft_consensus.cc:515] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.841116 1136 leader_election.cc:304] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:50.841151 1136 leader_election.cc:290] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Requested vote from peers
I20251025 14:08:50.841198 1207 raft_consensus.cc:2804] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Leader election won for term 3
I20251025 14:08:50.841230 1136 ts_tablet_manager.cc:1434] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.000s sys 0.002s
I20251025 14:08:50.841275 1207 raft_consensus.cc:697] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:50.841317 1136 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap starting.
I20251025 14:08:50.841318 1207 consensus_queue.cc:237] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 2.6, Last appended by leader: 6, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.841600 1206 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:50.841663 1206 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:50.841660 1205 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: New leader b13151cc93314ac9994132675db23ba7. Latest consensus state: current_term: 3 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } } }
I20251025 14:08:50.841724 1205 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:08:50.842034 802 catalog_manager.cc:5649] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
I20251025 14:08:50.842214 1216 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:08:50.842263 1216 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:08:50.842384 1216 txn_status_manager.cc:716] Starting 1 commit tasks
W20251025 14:08:50.844969 1209 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:39220: Illegal state: Transaction 0 commit already in progress
W20251025 14:08:50.847509 838 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60474 (request call id 136) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:08:50.847558 838 rpcz_store.cc:269] 1025 14:08:50.761689 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:08:50.761740 (+ 51us) service_pool.cc:225] Handling call
1025 14:08:50.847503 (+ 85763us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:08:50.853121 1136 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap replayed 1/1 log segments. Stats: ops{read=100 overwritten=0 applied=100 ignored=0} inserts{seen=2239 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:08:50.853348 1136 tablet_bootstrap.cc:492] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Bootstrap complete.
I20251025 14:08:50.853448 1136 ts_tablet_manager.cc:1403] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent bootstrapping tablet: real 0.012s user 0.007s sys 0.005s
I20251025 14:08:50.853536 1136 raft_consensus.cc:359] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.853576 1136 raft_consensus.cc:740] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: b13151cc93314ac9994132675db23ba7, State: Initialized, Role: FOLLOWER
I20251025 14:08:50.853610 1136 consensus_queue.cc:260] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 100, Last appended: 2.100, Last appended by leader: 100, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.853657 1136 raft_consensus.cc:399] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:08:50.853682 1136 raft_consensus.cc:493] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:08:50.853696 1136 raft_consensus.cc:3060] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 2 FOLLOWER]: Advancing to term 3
I20251025 14:08:50.854154 1136 raft_consensus.cc:515] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.854210 1136 leader_election.cc:304] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b13151cc93314ac9994132675db23ba7; no voters:
I20251025 14:08:50.854249 1136 leader_election.cc:290] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [CANDIDATE]: Term 3 election: Requested vote from peers
I20251025 14:08:50.854310 1206 raft_consensus.cc:2804] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Leader election won for term 3
I20251025 14:08:50.854327 1136 ts_tablet_manager.cc:1434] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:08:50.854394 1206 raft_consensus.cc:697] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Becoming Leader. State: Replica: b13151cc93314ac9994132675db23ba7, State: Running, Role: LEADER
I20251025 14:08:50.854445 1206 consensus_queue.cc:237] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 100, Committed index: 100, Last appended: 2.100, Last appended by leader: 100, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } }
I20251025 14:08:50.854902 802 catalog_manager.cc:5649] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 reported cstate change: term changed from 1 to 3. New cstate: current_term: 3 leader_uuid: "b13151cc93314ac9994132675db23ba7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b13151cc93314ac9994132675db23ba7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 37113 } health_report { overall_health: HEALTHY } } }
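
All three tablets above follow the same post-bootstrap pattern: the restored Raft config contains a single VOTER, so raft_consensus.cc:399 triggers an election immediately, the term advances (here to 3), and the election is decided with 1 yes vote out of 1 voter. The decision only requires that yes votes reach a majority of the voter set; a small sketch of that rule, assuming majority = voters/2 + 1, which matches the election summaries here but is not Kudu's implementation:

    // election_quorum.cc: illustrate why a single-voter config decides an
    // election with one "yes" vote, as in the "Election decided" lines above.
    // Assumption: majority size is floor(voters / 2) + 1.
    #include <iostream>

    // Returns true if yes_votes form a majority of the voter set.
    bool ElectionWon(int voters, int yes_votes) {
      const int majority = voters / 2 + 1;
      return yes_votes >= majority;
    }

    int main() {
      // Single-replica config restored after the tablet server restart:
      // 1 voter, 1 yes vote -> the candidate wins immediately.
      std::cout << "1 voter,  1 yes: " << ElectionWon(1, 1) << std::endl;  // 1 (won)
      // For contrast, a 3-replica config needs 2 yes votes.
      std::cout << "3 voters, 1 yes: " << ElectionWon(3, 1) << std::endl;  // 0 (lost)
      std::cout << "3 voters, 2 yes: " << ElectionWon(3, 2) << std::endl;  // 1 (won)
      return 0;
    }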
W20251025 14:08:50.858167 1221 fault_injection.cc:43] FAULT INJECTION ENABLED!
W20251025 14:08:50.858210 1221 fault_injection.cc:44] THIS SERVER MAY CRASH!
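
The FAULT INJECTION banner indicates the restarted tablet server was started with a test-only crash point enabled, so the process may abort at an injected fault site rather than fail cleanly. A generic sketch of the probabilistic crash-point pattern such a banner refers to; the function name and flag value here are assumed for illustration and are not Kudu's fault_injection.cc:

    // maybe_fault.cc: generic sketch of a probabilistic crash point of the
    // kind the "FAULT INJECTION ENABLED" banner warns about. The fault name
    // and fraction are hypothetical; this is not Kudu's implementation.
    #include <cstdlib>
    #include <iostream>
    #include <random>

    // Crash the process with probability `fraction` when the named fault
    // site is reached. fraction == 0 disables the fault entirely.
    void MaybeFault(const char* fault_name, double fraction) {
      if (fraction <= 0) return;
      static std::mt19937 rng{std::random_device{}()};
      std::uniform_real_distribution<double> dist(0.0, 1.0);
      if (dist(rng) < fraction) {
        std::cerr << "injected fault hit: " << fault_name << std::endl;
        std::abort();  // simulate an abrupt crash at the fault site
      }
    }

    int main() {
      // e.g. a test might set a nonzero crash probability before a commit step.
      MaybeFault("hypothetical_crash_before_commit", 0.0);  // disabled here
      std::cout << "no fault injected" << std::endl;
      return 0;
    }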
I20251025 14:09:04.896371 31499 tablet_server.cc:178] TabletServer@127.30.194.193:37113 shutting down...
I20251025 14:09:04.899068 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:04.899175 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:04.899243 31499 raft_consensus.cc:2243] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Raft consensus shutting down.
I20251025 14:09:04.899291 31499 raft_consensus.cc:2272] T 9e3e5c3fe0464c7da45fd76b15b27e18 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:04.899456 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:04.899487 31499 raft_consensus.cc:2243] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Raft consensus shutting down.
I20251025 14:09:04.899506 31499 raft_consensus.cc:2272] T 86a059ba53f645fc9c1884eb8dac8e64 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:04.899613 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:04.899643 31499 raft_consensus.cc:2243] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 3 LEADER]: Raft consensus shutting down.
I20251025 14:09:04.899669 31499 raft_consensus.cc:2272] T 47bcb429014749a081956d281ce2c3b6 P b13151cc93314ac9994132675db23ba7 [term 3 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:04.911423 31499 tablet_server.cc:195] TabletServer@127.30.194.193:37113 shutdown complete.
I20251025 14:09:04.912942 31499 master.cc:561] Master@127.30.194.254:45321 shutting down...
I20251025 14:09:04.916285 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:04.916349 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:04.916368 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 8a84de90551c4dac8b32c22b23e1f4c5: stopping tablet replica
I20251025 14:09:04.928508 31499 master.cc:583] Master@127.30.194.254:45321 shutdown complete.
[ OK ] TxnCommitITest.TestRestartingWhileCommitting (25468 ms)
[ RUN ] TxnCommitITest.TestAbortRacingWithBotchedCommit
I20251025 14:09:04.932525 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:40045
I20251025 14:09:04.932658 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:04.934063 1237 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:04.934134 1235 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:04.934058 1234 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:04.934325 31499 server_base.cc:1047] running on GCE node
I20251025 14:09:04.934398 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:04.934420 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:04.934432 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401344934433 us; error 0 us; skew 500 ppm
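
The hybrid clock line reports a maximum error of 0 us with an assumed skew of 500 ppm; with a synchronized time source that error bound would grow by up to 500 us for every second since the last sync, while under 'system_unsync' it is reported as 0, which is why the preceding warning calls the bounds inaccurate for distributed clusters. A small arithmetic sketch of that growth rate, illustrative only and not Kudu's hybrid_clock code:

    // clock_error.cc: how a 500 ppm skew assumption translates into a
    // growing error bound between clock synchronizations. Illustrative
    // arithmetic only, not Kudu's hybrid_clock implementation.
    #include <cstdint>
    #include <iostream>

    // Maximum drift (in microseconds) accumulated over `elapsed_us`
    // microseconds at `skew_ppm` parts per million.
    int64_t MaxDriftUs(int64_t elapsed_us, int64_t skew_ppm) {
      return elapsed_us * skew_ppm / 1000000;
    }

    int main() {
      const int64_t skew_ppm = 500;  // "skew 500 ppm" from the log
      for (int64_t secs : {1, 10, 60}) {
        std::cout << secs << "s since last sync -> up to "
                  << MaxDriftUs(secs * 1000000, skew_ppm) << " us of drift"
                  << std::endl;  // 500, 5000, 30000 us
      }
      return 0;
    }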
I20251025 14:09:04.934966 31499 webserver.cc:492] Webserver started at http://127.30.194.254:40901/ using document root <none> and password file <none>
I20251025 14:09:04.935032 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:04.935078 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:04.935113 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:04.935338 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61"
format_stamp: "Formatted at 2025-10-25 14:09:04 on dist-test-slave-v4l2"
I20251025 14:09:04.936157 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:04.936560 1242 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:04.936687 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:04.936738 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61"
format_stamp: "Formatted at 2025-10-25 14:09:04 on dist-test-slave-v4l2"
I20251025 14:09:04.936771 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:04.941520 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:04.941665 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:04.945086 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:40045
I20251025 14:09:04.946039 1304 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:40045 every 8 connection(s)
I20251025 14:09:04.946170 1305 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:04.947160 1305 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: Bootstrap starting.
I20251025 14:09:04.947446 1305 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:04.947963 1305 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: No bootstrap required, opened a new log
I20251025 14:09:04.948096 1305 raft_consensus.cc:359] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER }
I20251025 14:09:04.948149 1305 raft_consensus.cc:385] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:04.948163 1305 raft_consensus.cc:740] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 387d6ad6b6ad421b86e6de2ddd1c8a61, State: Initialized, Role: FOLLOWER
I20251025 14:09:04.948216 1305 consensus_queue.cc:260] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER }
I20251025 14:09:04.948267 1305 raft_consensus.cc:399] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:04.948287 1305 raft_consensus.cc:493] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:04.948314 1305 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:04.948774 1305 raft_consensus.cc:515] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER }
I20251025 14:09:04.948827 1305 leader_election.cc:304] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 387d6ad6b6ad421b86e6de2ddd1c8a61; no voters:
I20251025 14:09:04.948921 1305 leader_election.cc:290] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:04.948963 1308 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:04.949081 1305 sys_catalog.cc:565] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:09:04.949132 1308 raft_consensus.cc:697] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 1 LEADER]: Becoming Leader. State: Replica: 387d6ad6b6ad421b86e6de2ddd1c8a61, State: Running, Role: LEADER
I20251025 14:09:04.949200 1308 consensus_queue.cc:237] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER }
I20251025 14:09:04.949394 1308 sys_catalog.cc:455] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 387d6ad6b6ad421b86e6de2ddd1c8a61. Latest consensus state: current_term: 1 leader_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER } }
I20251025 14:09:04.949405 1309 sys_catalog.cc:455] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "387d6ad6b6ad421b86e6de2ddd1c8a61" member_type: VOTER } }
I20251025 14:09:04.949637 1309 sys_catalog.cc:458] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:04.949620 1308 sys_catalog.cc:458] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:04.950356 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
W20251025 14:09:04.950436 1324 catalog_manager.cc:1568] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20251025 14:09:04.950479 1324 catalog_manager.cc:883] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20251025 14:09:04.950909 1317 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:09:04.951033 1317 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:09:04.951663 1317 catalog_manager.cc:1357] Generated new cluster ID: 840e5e8b136440cfa9a5ce5fd02e95bf
I20251025 14:09:04.951719 1317 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:09:04.983280 1317 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:09:04.983701 1317 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:09:04.987468 1317 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: Generated new TSK 0
I20251025 14:09:04.987542 1317 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:09:04.992065 1259 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:36828:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
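
The CreateTable request above is the transaction status table backing the TxnManager: three key columns (txn_id, entry_type, identifier), a metadata value column, a single replica, and range partitioning on txn_id (a later CreateTablet line shows the first range covering 0 <= txn_id < 1000000). For orientation, the same column layout expressed with the public Kudu C++ client's KuduSchemaBuilder; this is a sketch only, since the real kudu_system.kudu_transactions table is created internally rather than by user code:

    // txn_status_schema.cc: rebuild the column layout from the CreateTable
    // request above using the public Kudu C++ client API. Purely
    // illustrative; link against the kudu_client library to build.
    #include <kudu/client/schema.h>
    #include <iostream>
    #include <string>

    using kudu::client::KuduColumnSchema;
    using kudu::client::KuduSchema;
    using kudu::client::KuduSchemaBuilder;

    int main() {
      KuduSchemaBuilder b;
      b.AddColumn("txn_id")->Type(KuduColumnSchema::INT64)->NotNull();
      b.AddColumn("entry_type")->Type(KuduColumnSchema::INT8)->NotNull();
      b.AddColumn("identifier")->Type(KuduColumnSchema::STRING)->NotNull();
      b.AddColumn("metadata")->Type(KuduColumnSchema::STRING)->NotNull();
      // Compound primary key in the same order as the request's key columns.
      b.SetPrimaryKey({"txn_id", "entry_type", "identifier"});

      KuduSchema schema;
      kudu::Status s = b.Build(&schema);
      if (!s.ok()) {
        std::cerr << s.ToString() << std::endl;
        return 1;
      }
      std::cout << "schema built with " << schema.num_columns() << " columns"
                << std::endl;
      return 0;
    }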
I20251025 14:09:05.014269 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:05.015486 1332 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:05.015563 1335 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:05.015705 1333 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:05.015751 31499 server_base.cc:1047] running on GCE node
I20251025 14:09:05.015877 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:05.015915 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:05.015929 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401345015929 us; error 0 us; skew 500 ppm
I20251025 14:09:05.016517 31499 webserver.cc:492] Webserver started at http://127.30.194.193:34425/ using document root <none> and password file <none>
I20251025 14:09:05.016587 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:05.016620 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:05.016659 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:05.016901 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "f56b987beee8475e9609a55c27458bd7"
format_stamp: "Formatted at 2025-10-25 14:09:05 on dist-test-slave-v4l2"
I20251025 14:09:05.017822 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:05.018335 1340 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:05.018535 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:05.018584 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "f56b987beee8475e9609a55c27458bd7"
format_stamp: "Formatted at 2025-10-25 14:09:05 on dist-test-slave-v4l2"
I20251025 14:09:05.018631 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:05.051893 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:05.052101 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:05.052515 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:09:05.052552 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:05.052588 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:09:05.052655 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:05.055899 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:42789
I20251025 14:09:05.055948 1405 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:42789 every 8 connection(s)
I20251025 14:09:05.056350 1406 heartbeater.cc:344] Connected to a master server at 127.30.194.254:40045
I20251025 14:09:05.056399 1406 heartbeater.cc:461] Registering TS with master...
I20251025 14:09:05.056515 1406 heartbeater.cc:507] Master 127.30.194.254:40045 requested a full tablet report, sending...
I20251025 14:09:05.056713 1259 ts_manager.cc:194] Registered new tserver with Master: f56b987beee8475e9609a55c27458bd7 (127.30.194.193:42789)
I20251025 14:09:05.057216 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001129428s
I20251025 14:09:05.057528 1259 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:36834
I20251025 14:09:05.997152 1259 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34190:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:09:06.001071 1370 tablet_service.cc:1505] Processing CreateTablet for tablet 2febb5a3c9d34766ad6b6de2165e8d4f (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=2a5f7817411448a4a0bc0881b74a5d67]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:09:06.001199 1370 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2febb5a3c9d34766ad6b6de2165e8d4f. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.002378 1426 tablet_bootstrap.cc:492] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: Bootstrap starting.
I20251025 14:09:06.002692 1426 tablet_bootstrap.cc:654] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.003172 1426 tablet_bootstrap.cc:492] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: No bootstrap required, opened a new log
I20251025 14:09:06.003219 1426 ts_tablet_manager.cc:1403] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:06.003305 1426 raft_consensus.cc:359] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.003351 1426 raft_consensus.cc:385] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.003365 1426 raft_consensus.cc:740] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.003403 1426 consensus_queue.cc:260] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.003469 1426 raft_consensus.cc:399] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.003487 1426 raft_consensus.cc:493] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.003506 1426 raft_consensus.cc:3060] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.003988 1426 raft_consensus.cc:515] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.004040 1426 leader_election.cc:304] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f56b987beee8475e9609a55c27458bd7; no voters:
I20251025 14:09:06.004127 1426 leader_election.cc:290] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.004184 1428 raft_consensus.cc:2804] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.004283 1426 ts_tablet_manager.cc:1434] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:06.004328 1428 raft_consensus.cc:697] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Becoming Leader. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Running, Role: LEADER
I20251025 14:09:06.004428 1406 heartbeater.cc:499] Master 127.30.194.254:40045 was elected leader, sending a full tablet report...
I20251025 14:09:06.004431 1428 consensus_queue.cc:237] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.004714 1429 tablet_replica.cc:442] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } } }
I20251025 14:09:06.004765 1430 tablet_replica.cc:442] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: TxnStatusTablet state changed. Reason: New leader f56b987beee8475e9609a55c27458bd7. Latest consensus state: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } } }
I20251025 14:09:06.004833 1429 tablet_replica.cc:445] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:06.004860 1430 tablet_replica.cc:445] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:06.004947 1259 catalog_manager.cc:5649] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 reported cstate change: term changed from 0 to 1, leader changed from <none> to f56b987beee8475e9609a55c27458bd7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:06.005120 1433 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:09:06.005162 1433 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:09:06.091665 31499 test_util.cc:276] Using random seed: 898263695
I20251025 14:09:06.096159 1259 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34208:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
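
The test-workload table requests two hash buckets on "key" with a single unbounded range, which is why exactly two tablets (hash PARTITION 0 and PARTITION 1) are created right below. Conceptually each row lands in bucket hash(key) % 2; a minimal sketch of that assignment, where std::hash stands in for Kudu's actual partition hash, so the bucket numbers will not match the server's:

    // hash_buckets.cc: sketch of hash-partition bucket assignment for the
    // 2-bucket test-workload table above. std::hash is a stand-in for the
    // real partition hash; only the modulo-by-bucket-count idea carries over.
    #include <cstdint>
    #include <functional>
    #include <iostream>

    // Map an INT32 key to one of `num_buckets` hash buckets.
    int Bucket(int32_t key, int num_buckets) {
      const uint64_t h = std::hash<int32_t>{}(key);
      return static_cast<int>(h % num_buckets);
    }

    int main() {
      const int num_buckets = 2;  // "num_buckets: 2" in the request
      for (int32_t key : {0, 1, 2, 3, 4}) {
        std::cout << "key " << key << " -> hash bucket "
                  << Bucket(key, num_buckets) << std::endl;
      }
      return 0;
    }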
I20251025 14:09:06.097528 1370 tablet_service.cc:1505] Processing CreateTablet for tablet 352c2742ad704273a2fc395a87aba73d (DEFAULT_TABLE table=test-workload [id=00e9e2b41e35406cbe0bf89d2f688b23]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:06.097540 1369 tablet_service.cc:1505] Processing CreateTablet for tablet 1083d9362bfd46e6899720a89b08f820 (DEFAULT_TABLE table=test-workload [id=00e9e2b41e35406cbe0bf89d2f688b23]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:06.097651 1369 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1083d9362bfd46e6899720a89b08f820. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.097750 1370 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 352c2742ad704273a2fc395a87aba73d. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.098932 1426 tablet_bootstrap.cc:492] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: Bootstrap starting.
I20251025 14:09:06.099218 1426 tablet_bootstrap.cc:654] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.099671 1426 tablet_bootstrap.cc:492] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: No bootstrap required, opened a new log
I20251025 14:09:06.099710 1426 ts_tablet_manager.cc:1403] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:06.099828 1426 raft_consensus.cc:359] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.099872 1426 raft_consensus.cc:385] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.099886 1426 raft_consensus.cc:740] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.099931 1426 consensus_queue.cc:260] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.099972 1426 raft_consensus.cc:399] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.099987 1426 raft_consensus.cc:493] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.100013 1426 raft_consensus.cc:3060] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.100507 1426 raft_consensus.cc:515] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.100561 1426 leader_election.cc:304] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f56b987beee8475e9609a55c27458bd7; no voters:
I20251025 14:09:06.100603 1426 leader_election.cc:290] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.100672 1429 raft_consensus.cc:2804] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.100682 1426 ts_tablet_manager.cc:1434] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:06.100739 1429 raft_consensus.cc:697] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Becoming Leader. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Running, Role: LEADER
I20251025 14:09:06.100821 1426 tablet_bootstrap.cc:492] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: Bootstrap starting.
I20251025 14:09:06.100807 1429 consensus_queue.cc:237] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.101243 1426 tablet_bootstrap.cc:654] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.101338 1259 catalog_manager.cc:5649] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 reported cstate change: term changed from 0 to 1, leader changed from <none> to f56b987beee8475e9609a55c27458bd7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:06.101843 1426 tablet_bootstrap.cc:492] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: No bootstrap required, opened a new log
I20251025 14:09:06.101897 1426 ts_tablet_manager.cc:1403] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:06.102022 1426 raft_consensus.cc:359] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.102062 1426 raft_consensus.cc:385] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.102077 1426 raft_consensus.cc:740] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.102108 1426 consensus_queue.cc:260] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.102150 1426 raft_consensus.cc:399] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.102174 1426 raft_consensus.cc:493] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.102201 1426 raft_consensus.cc:3060] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.102643 1426 raft_consensus.cc:515] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.102690 1426 leader_election.cc:304] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f56b987beee8475e9609a55c27458bd7; no voters:
I20251025 14:09:06.102718 1426 leader_election.cc:290] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.102757 1429 raft_consensus.cc:2804] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.102789 1426 ts_tablet_manager.cc:1434] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:06.102804 1429 raft_consensus.cc:697] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Becoming Leader. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Running, Role: LEADER
I20251025 14:09:06.102838 1429 consensus_queue.cc:237] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.103260 1259 catalog_manager.cc:5649] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 reported cstate change: term changed from 0 to 1, leader changed from <none> to f56b987beee8475e9609a55c27458bd7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:06.155100 31499 test_util.cc:276] Using random seed: 898327123
I20251025 14:09:06.159884 1259 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34222:
name: "default.second_table"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:09:06.161207 1369 tablet_service.cc:1505] Processing CreateTablet for tablet 9dd3cd81d22346488c1b3c3d5c5de7a7 (DEFAULT_TABLE table=default.second_table [id=4fa661d8f3f04a21824a1d0ffbeba048]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:06.161207 1370 tablet_service.cc:1505] Processing CreateTablet for tablet 1bb3ce5cf6944fe2a4be4eb3ff3b30fa (DEFAULT_TABLE table=default.second_table [id=4fa661d8f3f04a21824a1d0ffbeba048]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:06.161362 1370 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1bb3ce5cf6944fe2a4be4eb3ff3b30fa. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.161425 1369 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9dd3cd81d22346488c1b3c3d5c5de7a7. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.162668 1426 tablet_bootstrap.cc:492] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Bootstrap starting.
I20251025 14:09:06.163048 1426 tablet_bootstrap.cc:654] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.163537 1426 tablet_bootstrap.cc:492] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: No bootstrap required, opened a new log
I20251025 14:09:06.163573 1426 ts_tablet_manager.cc:1403] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:06.163678 1426 raft_consensus.cc:359] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.163724 1426 raft_consensus.cc:385] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.163738 1426 raft_consensus.cc:740] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.163774 1426 consensus_queue.cc:260] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.163817 1426 raft_consensus.cc:399] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.163839 1426 raft_consensus.cc:493] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.163854 1426 raft_consensus.cc:3060] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.164299 1426 raft_consensus.cc:515] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.164353 1426 leader_election.cc:304] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f56b987beee8475e9609a55c27458bd7; no voters:
I20251025 14:09:06.164386 1426 leader_election.cc:290] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.164458 1426 ts_tablet_manager.cc:1434] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:06.164464 1430 raft_consensus.cc:2804] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.164565 1430 raft_consensus.cc:697] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Becoming Leader. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Running, Role: LEADER
I20251025 14:09:06.164593 1430 consensus_queue.cc:237] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.164788 1426 tablet_bootstrap.cc:492] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Bootstrap starting.
I20251025 14:09:06.165205 1426 tablet_bootstrap.cc:654] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.165732 1259 catalog_manager.cc:5649] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 reported cstate change: term changed from 0 to 1, leader changed from <none> to f56b987beee8475e9609a55c27458bd7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:06.165889 1426 tablet_bootstrap.cc:492] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: No bootstrap required, opened a new log
I20251025 14:09:06.165936 1426 ts_tablet_manager.cc:1403] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:06.166039 1426 raft_consensus.cc:359] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.166074 1426 raft_consensus.cc:385] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.166091 1426 raft_consensus.cc:740] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.166134 1426 consensus_queue.cc:260] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.166177 1426 raft_consensus.cc:399] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.166206 1426 raft_consensus.cc:493] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.166224 1426 raft_consensus.cc:3060] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.166774 1426 raft_consensus.cc:515] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.166839 1426 leader_election.cc:304] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f56b987beee8475e9609a55c27458bd7; no voters:
I20251025 14:09:06.166883 1426 leader_election.cc:290] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.167004 1429 raft_consensus.cc:2804] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.167008 1426 ts_tablet_manager.cc:1434] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Time spent starting tablet: real 0.001s user 0.001s sys 0.001s
I20251025 14:09:06.167135 1429 raft_consensus.cc:697] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Becoming Leader. State: Replica: f56b987beee8475e9609a55c27458bd7, State: Running, Role: LEADER
I20251025 14:09:06.167186 1429 consensus_queue.cc:237] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } }
I20251025 14:09:06.167690 1259 catalog_manager.cc:5649] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 reported cstate change: term changed from 0 to 1, leader changed from <none> to f56b987beee8475e9609a55c27458bd7 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "f56b987beee8475e9609a55c27458bd7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f56b987beee8475e9609a55c27458bd7" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42789 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:06.247896 1258 catalog_manager.cc:2507] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:34202:
table { table_name: "default.second_table" } modify_external_catalogs: true
I20251025 14:09:06.247989 1258 catalog_manager.cc:2755] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:34202:
table { table_name: "default.second_table" } modify_external_catalogs: true
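
On the client side, the soft-delete/delete sequence above is driven by a single call; a minimal sketch, assuming a connected KuduClient as in the earlier sketch:

// Minimal sketch: dropping "default.second_table" via the public client API.
// Assumes `client` is a connected kudu::client::KuduClient (see the earlier sketch).
KUDU_CHECK_OK(client->DeleteTable("default.second_table"));

The master then fans out DeleteTablet requests to the affected replicas, as the following lines show.
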
I20251025 14:09:06.248517 1258 catalog_manager.cc:5936] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: Sending DeleteTablet for 1 replicas of tablet 9dd3cd81d22346488c1b3c3d5c5de7a7
I20251025 14:09:06.248589 1258 catalog_manager.cc:5936] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: Sending DeleteTablet for 1 replicas of tablet 1bb3ce5cf6944fe2a4be4eb3ff3b30fa
I20251025 14:09:06.248749 1369 tablet_service.cc:1552] Processing DeleteTablet for tablet 9dd3cd81d22346488c1b3c3d5c5de7a7 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:09:06 UTC) from {username='slave'} at 127.0.0.1:60510
I20251025 14:09:06.248754 1370 tablet_service.cc:1552] Processing DeleteTablet for tablet 1bb3ce5cf6944fe2a4be4eb3ff3b30fa with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:09:06 UTC) from {username='slave'} at 127.0.0.1:60510
I20251025 14:09:06.248970 1482 tablet_replica.cc:333] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: stopping tablet replica
I20251025 14:09:06.249060 1482 raft_consensus.cc:2243] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.249110 1482 raft_consensus.cc:2272] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.249406 1482 ts_tablet_manager.cc:1916] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Deleting tablet data with delete state TABLET_DATA_DELETED
W20251025 14:09:06.250167 1411 meta_cache.cc:788] Not found: LookupRpcById { tablet: '9dd3cd81d22346488c1b3c3d5c5de7a7', attempt: 1 } failed
I20251025 14:09:06.250221 1411 txn_status_manager.cc:244] Participant 9dd3cd81d22346488c1b3c3d5c5de7a7 of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '9dd3cd81d22346488c1b3c3d5c5de7a7', attempt: 1 } failed
I20251025 14:09:06.250268 1411 txn_status_manager.cc:244] Participant 1bb3ce5cf6944fe2a4be4eb3ff3b30fa of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '1bb3ce5cf6944fe2a4be4eb3ff3b30fa', attempt: 1 } failed
I20251025 14:09:06.250875 1410 txn_status_manager.cc:211] Scheduling ABORT_TXNs on participants for txn 0
I20251025 14:09:06.251240 1482 ts_tablet_manager.cc:1929] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.100
I20251025 14:09:06.251313 1482 log.cc:1199] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/9dd3cd81d22346488c1b3c3d5c5de7a7
I20251025 14:09:06.251436 1411 txn_status_manager.cc:337] Participant 1bb3ce5cf6944fe2a4be4eb3ff3b30fa was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '1bb3ce5cf6944fe2a4be4eb3ff3b30fa', attempt: 1 } failed
I20251025 14:09:06.251502 1411 txn_status_manager.cc:337] Participant 9dd3cd81d22346488c1b3c3d5c5de7a7 was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '9dd3cd81d22346488c1b3c3d5c5de7a7', attempt: 1 } failed
I20251025 14:09:06.251595 1482 ts_tablet_manager.cc:1950] T 9dd3cd81d22346488c1b3c3d5c5de7a7 P f56b987beee8475e9609a55c27458bd7: Deleting consensus metadata
I20251025 14:09:06.251866 1245 catalog_manager.cc:4985] TS f56b987beee8475e9609a55c27458bd7 (127.30.194.193:42789): tablet 9dd3cd81d22346488c1b3c3d5c5de7a7 (table default.second_table [id=4fa661d8f3f04a21824a1d0ffbeba048]) successfully deleted
I20251025 14:09:06.251876 1482 tablet_replica.cc:333] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: stopping tablet replica
I20251025 14:09:06.251976 1482 raft_consensus.cc:2243] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.252019 1482 raft_consensus.cc:2272] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.252307 1482 ts_tablet_manager.cc:1916] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Deleting tablet data with delete state TABLET_DATA_DELETED
I20251025 14:09:06.253414 1482 ts_tablet_manager.cc:1929] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.104
I20251025 14:09:06.253474 1482 log.cc:1199] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestAbortRacingWithBotchedCommit.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/1bb3ce5cf6944fe2a4be4eb3ff3b30fa
I20251025 14:09:06.253741 1482 ts_tablet_manager.cc:1950] T 1bb3ce5cf6944fe2a4be4eb3ff3b30fa P f56b987beee8475e9609a55c27458bd7: Deleting consensus metadata
I20251025 14:09:06.254038 1245 catalog_manager.cc:4985] TS f56b987beee8475e9609a55c27458bd7 (127.30.194.193:42789): tablet 1bb3ce5cf6944fe2a4be4eb3ff3b30fa (table default.second_table [id=4fa661d8f3f04a21824a1d0ffbeba048]) successfully deleted
I20251025 14:09:06.254396 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:09:06.256840 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:06.257156 31499 tablet_replica.cc:333] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7: stopping tablet replica
I20251025 14:09:06.257223 31499 raft_consensus.cc:2243] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.257266 31499 raft_consensus.cc:2272] T 1083d9362bfd46e6899720a89b08f820 P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.257601 31499 tablet_replica.cc:333] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7: stopping tablet replica
I20251025 14:09:06.257647 31499 raft_consensus.cc:2243] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.257674 31499 raft_consensus.cc:2272] T 2febb5a3c9d34766ad6b6de2165e8d4f P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.257889 31499 tablet_replica.cc:333] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7: stopping tablet replica
I20251025 14:09:06.257933 31499 raft_consensus.cc:2243] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.257970 31499 raft_consensus.cc:2272] T 352c2742ad704273a2fc395a87aba73d P f56b987beee8475e9609a55c27458bd7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.270233 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:09:06.271520 31499 master.cc:561] Master@127.30.194.254:40045 shutting down...
I20251025 14:09:06.273989 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:06.274047 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:06.274066 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 387d6ad6b6ad421b86e6de2ddd1c8a61: stopping tablet replica
I20251025 14:09:06.285876 31499 master.cc:583] Master@127.30.194.254:40045 shutdown complete.
[ OK ] TxnCommitITest.TestAbortRacingWithBotchedCommit (1357 ms)
[ RUN ] TxnCommitITest.TestRestartingWhileCommittingAndDeleting
I20251025 14:09:06.289713 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:40799
I20251025 14:09:06.289858 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:06.291040 1485 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:06.291069 1484 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:06.291085 31499 server_base.cc:1047] running on GCE node
W20251025 14:09:06.291252 1487 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:06.291364 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:06.291400 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:06.291414 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401346291413 us; error 0 us; skew 500 ppm
I20251025 14:09:06.291883 31499 webserver.cc:492] Webserver started at http://127.30.194.254:35309/ using document root <none> and password file <none>
I20251025 14:09:06.291944 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:06.291973 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:06.292006 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:06.292238 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "23b75580e2dc495ab6b5273d2694976b"
format_stamp: "Formatted at 2025-10-25 14:09:06 on dist-test-slave-v4l2"
I20251025 14:09:06.293110 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:06.293653 1492 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.293849 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.293898 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "23b75580e2dc495ab6b5273d2694976b"
format_stamp: "Formatted at 2025-10-25 14:09:06 on dist-test-slave-v4l2"
I20251025 14:09:06.293932 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:06.300040 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:06.300182 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:06.302901 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:40799
I20251025 14:09:06.304972 1554 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:40799 every 8 connection(s)
I20251025 14:09:06.305111 1555 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:06.305975 1555 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: Bootstrap starting.
I20251025 14:09:06.306214 1555 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:06.306638 1555 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: No bootstrap required, opened a new log
I20251025 14:09:06.306728 1555 raft_consensus.cc:359] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER }
I20251025 14:09:06.306769 1555 raft_consensus.cc:385] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:06.306782 1555 raft_consensus.cc:740] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 23b75580e2dc495ab6b5273d2694976b, State: Initialized, Role: FOLLOWER
I20251025 14:09:06.306818 1555 consensus_queue.cc:260] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER }
I20251025 14:09:06.306864 1555 raft_consensus.cc:399] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:06.306890 1555 raft_consensus.cc:493] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:06.306910 1555 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:06.307325 1555 raft_consensus.cc:515] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER }
I20251025 14:09:06.307377 1555 leader_election.cc:304] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 23b75580e2dc495ab6b5273d2694976b; no voters:
I20251025 14:09:06.307475 1555 leader_election.cc:290] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:06.307550 1558 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:06.307680 1555 sys_catalog.cc:565] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:09:06.307693 1558 raft_consensus.cc:697] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 1 LEADER]: Becoming Leader. State: Replica: 23b75580e2dc495ab6b5273d2694976b, State: Running, Role: LEADER
I20251025 14:09:06.307785 1558 consensus_queue.cc:237] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER }
I20251025 14:09:06.308007 1559 sys_catalog.cc:455] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "23b75580e2dc495ab6b5273d2694976b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER } }
I20251025 14:09:06.308014 1560 sys_catalog.cc:455] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [sys.catalog]: SysCatalogTable state changed. Reason: New leader 23b75580e2dc495ab6b5273d2694976b. Latest consensus state: current_term: 1 leader_uuid: "23b75580e2dc495ab6b5273d2694976b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "23b75580e2dc495ab6b5273d2694976b" member_type: VOTER } }
I20251025 14:09:06.308090 1559 sys_catalog.cc:458] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:06.308101 1560 sys_catalog.cc:458] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:06.308444 1563 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:09:06.308601 1563 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:09:06.309201 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:09:06.309234 1563 catalog_manager.cc:1357] Generated new cluster ID: 24fb6d78761c48689326af13754f6c17
I20251025 14:09:06.309267 1563 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:09:06.315878 1563 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:09:06.316264 1563 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:09:06.323122 1563 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: Generated new TSK 0
I20251025 14:09:06.323199 1563 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:09:06.324811 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:06.325930 1583 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:06.325937 1582 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:06.326001 31499 server_base.cc:1047] running on GCE node
W20251025 14:09:06.326085 1585 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:06.326344 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:06.326376 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:06.326388 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401346326389 us; error 0 us; skew 500 ppm
I20251025 14:09:06.327510 31499 webserver.cc:492] Webserver started at http://127.30.194.193:33837/ using document root <none> and password file <none>
I20251025 14:09:06.327688 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:06.327755 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:06.327802 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:06.328353 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "8b781a0fd97848e19e621599588e1e4e"
format_stamp: "Formatted at 2025-10-25 14:09:06 on dist-test-slave-v4l2"
I20251025 14:09:06.328408 1509 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60284:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
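
kudu_system.kudu_transactions is the transaction status table backing multi-row transactions. The client-side flow that exercises it looks roughly like the sketch below; this assumes a connected client and a Kudu release with C++ transaction support, with the actual writes and error handling elided.

// Minimal sketch: client-side transaction flow recorded in the txn status table above.
// Assumes `client` is a connected kudu::client::KuduClient.
#include "kudu/client/client.h"
#include "kudu/client/stubs.h"  // KUDU_CHECK_OK

using kudu::client::KuduClient;
using kudu::client::KuduSession;
using kudu::client::KuduTransaction;
using kudu::client::sp::shared_ptr;

void RunTxn(const shared_ptr<KuduClient>& client) {
  shared_ptr<KuduTransaction> txn;
  KUDU_CHECK_OK(client->NewTransaction(&txn));  // registers the txn with the status manager

  shared_ptr<KuduSession> session;
  KUDU_CHECK_OK(txn->CreateSession(&session));  // writes through this session join the txn

  // ... apply inserts/updates through `session` here ...

  KUDU_CHECK_OK(session->Flush());
  KUDU_CHECK_OK(txn->Commit());  // drives BEGIN_COMMIT and the rest of the commit protocol
}
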
I20251025 14:09:06.329478 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.002s
I20251025 14:09:06.329933 1590 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.330078 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.330129 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "8b781a0fd97848e19e621599588e1e4e"
format_stamp: "Formatted at 2025-10-25 14:09:06 on dist-test-slave-v4l2"
I20251025 14:09:06.330164 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:06.366048 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:06.366240 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:06.366545 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:09:06.366569 31499 ts_tablet_manager.cc:531] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.366590 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:09:06.366604 31499 ts_tablet_manager.cc:595] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:06.369846 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:42407
I20251025 14:09:06.369952 1655 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:42407 every 8 connection(s)
I20251025 14:09:06.370397 1656 heartbeater.cc:344] Connected to a master server at 127.30.194.254:40799
I20251025 14:09:06.370465 1656 heartbeater.cc:461] Registering TS with master...
I20251025 14:09:06.370558 1656 heartbeater.cc:507] Master 127.30.194.254:40799 requested a full tablet report, sending...
I20251025 14:09:06.370785 1508 ts_manager.cc:194] Registered new tserver with Master: 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407)
I20251025 14:09:06.371259 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.00120799s
I20251025 14:09:06.371461 1508 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:60300
I20251025 14:09:07.333952 1508 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60308:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:09:07.338325 1620 tablet_service.cc:1505] Processing CreateTablet for tablet c04bc36d356b4746bae843dbae8fee5e (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=059f87a504c44d02a2eeb222f6332527]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:09:07.338469 1620 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c04bc36d356b4746bae843dbae8fee5e. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:07.339706 1676 tablet_bootstrap.cc:492] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:07.340025 1676 tablet_bootstrap.cc:654] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:07.340584 1676 tablet_bootstrap.cc:492] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: No bootstrap required, opened a new log
I20251025 14:09:07.340629 1676 ts_tablet_manager.cc:1403] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:07.340754 1676 raft_consensus.cc:359] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.340801 1676 raft_consensus.cc:385] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:07.340814 1676 raft_consensus.cc:740] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:07.340849 1676 consensus_queue.cc:260] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.340902 1676 raft_consensus.cc:399] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:07.340925 1676 raft_consensus.cc:493] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:07.340945 1676 raft_consensus.cc:3060] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:07.341544 1676 raft_consensus.cc:515] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.341609 1676 leader_election.cc:304] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:07.341696 1676 leader_election.cc:290] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:07.341740 1678 raft_consensus.cc:2804] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:07.341811 1676 ts_tablet_manager.cc:1434] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.341846 1656 heartbeater.cc:499] Master 127.30.194.254:40799 was elected leader, sending a full tablet report...
I20251025 14:09:07.341898 1678 raft_consensus.cc:697] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:07.341944 1678 consensus_queue.cc:237] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.342136 1679 tablet_replica.cc:442] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } } }
I20251025 14:09:07.342202 1679 tablet_replica.cc:445] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:07.342190 1680 tablet_replica.cc:442] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: TxnStatusTablet state changed. Reason: New leader 8b781a0fd97848e19e621599588e1e4e. Latest consensus state: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } } }
I20251025 14:09:07.342285 1680 tablet_replica.cc:445] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:07.342406 1508 catalog_manager.cc:5649] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b781a0fd97848e19e621599588e1e4e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:07.342438 1682 txn_status_manager.cc:874] Waiting until the node catches up with all replicated operations from the previous term...
I20251025 14:09:07.342530 1682 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:09:07.404651 31499 test_util.cc:276] Using random seed: 899576681
I20251025 14:09:07.408700 1508 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60318:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
I20251025 14:09:07.410162 1620 tablet_service.cc:1505] Processing CreateTablet for tablet fde7a2ed767b42ae9a04c835da2744c4 (DEFAULT_TABLE table=test-workload [id=10c040cfaa6c4493a8bd554b576452cf]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:07.410189 1619 tablet_service.cc:1505] Processing CreateTablet for tablet d87cc678173a407996d5dbf6b60adaba (DEFAULT_TABLE table=test-workload [id=10c040cfaa6c4493a8bd554b576452cf]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:07.410279 1620 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fde7a2ed767b42ae9a04c835da2744c4. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:07.410378 1619 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d87cc678173a407996d5dbf6b60adaba. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:07.411288 1676 tablet_bootstrap.cc:492] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:07.411706 1676 tablet_bootstrap.cc:654] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:07.412195 1676 tablet_bootstrap.cc:492] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: No bootstrap required, opened a new log
I20251025 14:09:07.412235 1676 ts_tablet_manager.cc:1403] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.412375 1676 raft_consensus.cc:359] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.412421 1676 raft_consensus.cc:385] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:07.412436 1676 raft_consensus.cc:740] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:07.412482 1676 consensus_queue.cc:260] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.412530 1676 raft_consensus.cc:399] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:07.412545 1676 raft_consensus.cc:493] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:07.412575 1676 raft_consensus.cc:3060] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:07.413082 1676 raft_consensus.cc:515] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.413138 1676 leader_election.cc:304] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:07.413177 1676 leader_election.cc:290] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:07.413210 1678 raft_consensus.cc:2804] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:07.413246 1676 ts_tablet_manager.cc:1434] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.413277 1678 raft_consensus.cc:697] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:07.413295 1676 tablet_bootstrap.cc:492] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:07.413331 1678 consensus_queue.cc:237] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.413686 1676 tablet_bootstrap.cc:654] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:07.413808 1508 catalog_manager.cc:5649] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b781a0fd97848e19e621599588e1e4e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:07.414196 1676 tablet_bootstrap.cc:492] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: No bootstrap required, opened a new log
I20251025 14:09:07.414242 1676 ts_tablet_manager.cc:1403] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.414381 1676 raft_consensus.cc:359] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.414414 1676 raft_consensus.cc:385] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:07.414427 1676 raft_consensus.cc:740] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:07.414458 1676 consensus_queue.cc:260] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.414479 1676 raft_consensus.cc:399] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:07.414491 1676 raft_consensus.cc:493] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:07.414506 1676 raft_consensus.cc:3060] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:07.414901 1676 raft_consensus.cc:515] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.414939 1676 leader_election.cc:304] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:07.414973 1676 leader_election.cc:290] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:07.414996 1680 raft_consensus.cc:2804] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:07.415030 1676 ts_tablet_manager.cc:1434] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:07.415042 1680 raft_consensus.cc:697] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:07.415082 1680 consensus_queue.cc:237] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.415439 1508 catalog_manager.cc:5649] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b781a0fd97848e19e621599588e1e4e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:07.470433 31499 test_util.cc:276] Using random seed: 899642454
I20251025 14:09:07.474931 1508 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60322:
name: "default.second_table"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\004\001\000\377\377\377?""\004\001\000\377\377\377?"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20251025 14:09:07.476152 1619 tablet_service.cc:1505] Processing CreateTablet for tablet 1903fc6353994f2fb22faf73d634c97f (DEFAULT_TABLE table=default.second_table [id=d9afc6f97b8d4626a81fcf842938323d]), partition=RANGE (key) PARTITION VALUES < 1073741823
I20251025 14:09:07.476150 1620 tablet_service.cc:1505] Processing CreateTablet for tablet b74f7644b6a84e84800f2178f8915401 (DEFAULT_TABLE table=default.second_table [id=d9afc6f97b8d4626a81fcf842938323d]), partition=RANGE (key) PARTITION 1073741823 <= VALUES
I20251025 14:09:07.476303 1619 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1903fc6353994f2fb22faf73d634c97f. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:07.476370 1620 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b74f7644b6a84e84800f2178f8915401. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:07.477345 1676 tablet_bootstrap.cc:492] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:07.477751 1676 tablet_bootstrap.cc:654] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:07.478251 1676 tablet_bootstrap.cc:492] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: No bootstrap required, opened a new log
I20251025 14:09:07.478300 1676 ts_tablet_manager.cc:1403] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.478426 1676 raft_consensus.cc:359] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.478475 1676 raft_consensus.cc:385] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:07.478490 1676 raft_consensus.cc:740] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:07.478544 1676 consensus_queue.cc:260] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.478588 1676 raft_consensus.cc:399] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:07.478617 1676 raft_consensus.cc:493] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:07.478645 1676 raft_consensus.cc:3060] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:07.479269 1676 raft_consensus.cc:515] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.479338 1676 leader_election.cc:304] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:07.479385 1676 leader_election.cc:290] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:07.479458 1676 ts_tablet_manager.cc:1434] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:07.479565 1676 tablet_bootstrap.cc:492] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:07.479447 1680 raft_consensus.cc:2804] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:07.479751 1680 raft_consensus.cc:697] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:07.479801 1680 consensus_queue.cc:237] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.480054 1676 tablet_bootstrap.cc:654] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:07.480244 1508 catalog_manager.cc:5649] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b781a0fd97848e19e621599588e1e4e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:07.480923 1676 tablet_bootstrap.cc:492] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: No bootstrap required, opened a new log
I20251025 14:09:07.480978 1676 ts_tablet_manager.cc:1403] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.481139 1676 raft_consensus.cc:359] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.481184 1676 raft_consensus.cc:385] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:07.481205 1676 raft_consensus.cc:740] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:07.481268 1676 consensus_queue.cc:260] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.481318 1676 raft_consensus.cc:399] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:07.481339 1676 raft_consensus.cc:493] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:07.481365 1676 raft_consensus.cc:3060] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:07.481957 1676 raft_consensus.cc:515] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.482013 1676 leader_election.cc:304] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:07.482044 1676 leader_election.cc:290] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:07.482139 1680 raft_consensus.cc:2804] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:07.482146 1676 ts_tablet_manager.cc:1434] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:07.482198 1680 raft_consensus.cc:697] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:07.482239 1680 consensus_queue.cc:237] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:07.482676 1508 catalog_manager.cc:5649] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b781a0fd97848e19e621599588e1e4e (127.30.194.193). New cstate: current_term: 1 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:07.566215 1508 catalog_manager.cc:2507] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:60316:
table { table_name: "default.second_table" } modify_external_catalogs: true
I20251025 14:09:07.566314 1508 catalog_manager.cc:2755] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:60316:
table { table_name: "default.second_table" } modify_external_catalogs: true
I20251025 14:09:07.566881 1508 catalog_manager.cc:5936] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: Sending DeleteTablet for 1 replicas of tablet 1903fc6353994f2fb22faf73d634c97f
I20251025 14:09:07.566954 1508 catalog_manager.cc:5936] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: Sending DeleteTablet for 1 replicas of tablet b74f7644b6a84e84800f2178f8915401
I20251025 14:09:07.567188 1619 tablet_service.cc:1552] Processing DeleteTablet for tablet 1903fc6353994f2fb22faf73d634c97f with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:09:07 UTC) from {username='slave'} at 127.0.0.1:53604
I20251025 14:09:07.567217 1620 tablet_service.cc:1552] Processing DeleteTablet for tablet b74f7644b6a84e84800f2178f8915401 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-10-25 14:09:07 UTC) from {username='slave'} at 127.0.0.1:53604
I20251025 14:09:07.567409 1732 tablet_replica.cc:333] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: stopping tablet replica
I20251025 14:09:07.567469 1732 raft_consensus.cc:2243] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:07.567526 1732 raft_consensus.cc:2272] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:07.567898 1732 ts_tablet_manager.cc:1916] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Deleting tablet data with delete state TABLET_DATA_DELETED
W20251025 14:09:07.568426 1661 meta_cache.cc:788] Not found: LookupRpcById { tablet: '1903fc6353994f2fb22faf73d634c97f', attempt: 1 } failed
I20251025 14:09:07.568454 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:09:07.568480 1661 txn_status_manager.cc:244] Participant 1903fc6353994f2fb22faf73d634c97f of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '1903fc6353994f2fb22faf73d634c97f', attempt: 1 } failed
I20251025 14:09:07.569605 1732 ts_tablet_manager.cc:1929] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.107
I20251025 14:09:07.569675 1732 log.cc:1199] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/1903fc6353994f2fb22faf73d634c97f
I20251025 14:09:07.569903 1732 ts_tablet_manager.cc:1950] T 1903fc6353994f2fb22faf73d634c97f P 8b781a0fd97848e19e621599588e1e4e: Deleting consensus metadata
I20251025 14:09:07.570145 1732 tablet_replica.cc:333] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: stopping tablet replica
I20251025 14:09:07.570206 1732 raft_consensus.cc:2243] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:07.570221 1494 catalog_manager.cc:4985] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): tablet 1903fc6353994f2fb22faf73d634c97f (table default.second_table [id=d9afc6f97b8d4626a81fcf842938323d]) successfully deleted
I20251025 14:09:07.570250 1732 raft_consensus.cc:2272] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:07.572172 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:07.572175 1732 ts_tablet_manager.cc:1916] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Deleting tablet data with delete state TABLET_DATA_DELETED
I20251025 14:09:07.573385 1732 ts_tablet_manager.cc:1929] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.96
I20251025 14:09:07.573429 1732 log.cc:1199] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Deleting WAL directory at /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root/wals/b74f7644b6a84e84800f2178f8915401
I20251025 14:09:07.573645 1732 ts_tablet_manager.cc:1950] T b74f7644b6a84e84800f2178f8915401 P 8b781a0fd97848e19e621599588e1e4e: Deleting consensus metadata
W20251025 14:09:07.573889 1494 catalog_manager.cc:4977] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): delete failed for tablet b74f7644b6a84e84800f2178f8915401 with error code TABLET_NOT_RUNNING: Service unavailable: Tablet Manager is not running: MANAGER_QUIESCING
I20251025 14:09:07.573998 31499 tablet_replica.cc:333] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: stopping tablet replica
I20251025 14:09:07.574061 31499 raft_consensus.cc:2243] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:07.574105 31499 raft_consensus.cc:2272] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:07.574432 31499 tablet_replica.cc:333] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: stopping tablet replica
I20251025 14:09:07.574472 31499 raft_consensus.cc:2243] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:07.574501 31499 raft_consensus.cc:2272] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:07.624907 31499 txn_status_manager.cc:765] Waiting for 1 task(s) to stop
W20251025 14:09:07.638497 1494 proxy.cc:239] Call had error, refreshing address and retrying: Remote error: Service unavailable: service kudu.tserver.TabletServerAdminService not registered on TabletServer [suppressed 1 similar messages]
W20251025 14:09:07.638947 1494 catalog_manager.cc:4712] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet b74f7644b6a84e84800f2178f8915401: Remote error: Service unavailable: service kudu.tserver.TabletServerAdminService not registered on TabletServer
W20251025 14:09:07.766085 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 8) took 74 ms (client timeout 74 ms). Trace:
W20251025 14:09:07.766168 1539 rpcz_store.cc:269] 1025 14:09:07.691352 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:07.691383 (+ 31us) service_pool.cc:225] Handling call
1025 14:09:07.766070 (+ 74687us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:07.853147 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 9) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:09:07.853237 1539 rpcz_store.cc:269] 1025 14:09:07.767607 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:07.767648 (+ 41us) service_pool.cc:225] Handling call
1025 14:09:07.853133 (+ 85485us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:07.927547 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 10) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:09:07.927632 1538 rpcz_store.cc:269] 1025 14:09:07.844095 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:07.844150 (+ 55us) service_pool.cc:225] Handling call
1025 14:09:07.927528 (+ 83378us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.001758 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 11) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.001858 1539 rpcz_store.cc:269] 1025 14:09:07.920840 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:07.920869 (+ 29us) service_pool.cc:225] Handling call
1025 14:09:08.001728 (+ 80859us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.075819 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 12) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.075894 1538 rpcz_store.cc:269] 1025 14:09:07.997240 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:07.997286 (+ 46us) service_pool.cc:225] Handling call
1025 14:09:08.075807 (+ 78521us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.154196 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 13) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.154274 1539 rpcz_store.cc:269] 1025 14:09:08.074340 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.074380 (+ 40us) service_pool.cc:225] Handling call
1025 14:09:08.154182 (+ 79802us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.228276 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 14) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.228377 1538 rpcz_store.cc:269] 1025 14:09:08.151640 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.151685 (+ 45us) service_pool.cc:225] Handling call
1025 14:09:08.228261 (+ 76576us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.303129 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 15) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.303205 1539 rpcz_store.cc:269] 1025 14:09:08.228096 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.228193 (+ 97us) service_pool.cc:225] Handling call
1025 14:09:08.303115 (+ 74922us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.391307 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 16) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.391388 1539 rpcz_store.cc:269] 1025 14:09:08.304594 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.304627 (+ 33us) service_pool.cc:225] Handling call
1025 14:09:08.391291 (+ 86664us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.458020 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 17) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.458124 1538 rpcz_store.cc:269] 1025 14:09:08.381022 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.381091 (+ 69us) service_pool.cc:225] Handling call
1025 14:09:08.458001 (+ 76910us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.536218 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 18) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.536286 1539 rpcz_store.cc:269] 1025 14:09:08.458104 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.458161 (+ 57us) service_pool.cc:225] Handling call
1025 14:09:08.536207 (+ 78046us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.611315 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 19) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.611399 1538 rpcz_store.cc:269] 1025 14:09:08.534764 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.534822 (+ 58us) service_pool.cc:225] Handling call
1025 14:09:08.611299 (+ 76477us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.690820 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 20) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.690901 1538 rpcz_store.cc:269] 1025 14:09:08.612799 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.612833 (+ 34us) service_pool.cc:225] Handling call
1025 14:09:08.690802 (+ 77969us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.734843 1494 catalog_manager.cc:4712] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet b74f7644b6a84e84800f2178f8915401: Remote error: Service unavailable: service kudu.tserver.TabletServerAdminService not registered on TabletServer
W20251025 14:09:08.772571 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 21) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.772650 1539 rpcz_store.cc:269] 1025 14:09:08.689234 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.689301 (+ 67us) service_pool.cc:225] Handling call
1025 14:09:08.772560 (+ 83259us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.846781 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 22) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.846873 1538 rpcz_store.cc:269] 1025 14:09:08.765563 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.765607 (+ 44us) service_pool.cc:225] Handling call
1025 14:09:08.846769 (+ 81162us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.922116 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 23) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.922199 1539 rpcz_store.cc:269] 1025 14:09:08.842436 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.842945 (+ 509us) service_pool.cc:225] Handling call
1025 14:09:08.922104 (+ 79159us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:08.995419 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 24) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:09:08.995501 1538 rpcz_store.cc:269] 1025 14:09:08.919538 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.919601 (+ 63us) service_pool.cc:225] Handling call
1025 14:09:08.995406 (+ 75805us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.082446 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 25) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.082536 1538 rpcz_store.cc:269] 1025 14:09:08.995800 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:08.995834 (+ 34us) service_pool.cc:225] Handling call
1025 14:09:09.082431 (+ 86597us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.148819 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 26) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.148903 1539 rpcz_store.cc:269] 1025 14:09:09.072197 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.072234 (+ 37us) service_pool.cc:225] Handling call
1025 14:09:09.148803 (+ 76569us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.224447 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 27) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.224529 1539 rpcz_store.cc:269] 1025 14:09:09.149125 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.149146 (+ 21us) service_pool.cc:225] Handling call
1025 14:09:09.224430 (+ 75284us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.311218 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 28) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.311295 1539 rpcz_store.cc:269] 1025 14:09:09.226041 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.226097 (+ 56us) service_pool.cc:225] Handling call
1025 14:09:09.311205 (+ 85108us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.388701 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 29) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.388804 1538 rpcz_store.cc:269] 1025 14:09:09.302272 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.302320 (+ 48us) service_pool.cc:225] Handling call
1025 14:09:09.388688 (+ 86368us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.460316 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 30) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.460403 1539 rpcz_store.cc:269] 1025 14:09:09.378838 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.378879 (+ 41us) service_pool.cc:225] Handling call
1025 14:09:09.460302 (+ 81423us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.534603 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 31) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.534686 1538 rpcz_store.cc:269] 1025 14:09:09.455638 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.455702 (+ 64us) service_pool.cc:225] Handling call
1025 14:09:09.534590 (+ 78888us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.614521 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 32) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.614601 1539 rpcz_store.cc:269] 1025 14:09:09.532052 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.532090 (+ 38us) service_pool.cc:225] Handling call
1025 14:09:09.614509 (+ 82419us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.694483 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 33) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.694569 1538 rpcz_store.cc:269] 1025 14:09:09.608335 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.608372 (+ 37us) service_pool.cc:225] Handling call
1025 14:09:09.694471 (+ 86099us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.762869 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 34) took 77 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.762948 1539 rpcz_store.cc:269] 1025 14:09:09.685257 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.685320 (+ 63us) service_pool.cc:225] Handling call
1025 14:09:09.762857 (+ 77537us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.781669 1494 catalog_manager.cc:4712] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet b74f7644b6a84e84800f2178f8915401: Remote error: Service unavailable: service kudu.tserver.TabletServerAdminService not registered on TabletServer
W20251025 14:09:09.844758 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 35) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.844838 1538 rpcz_store.cc:269] 1025 14:09:09.762340 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.762408 (+ 68us) service_pool.cc:225] Handling call
1025 14:09:09.844745 (+ 82337us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.919385 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 36) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.919505 1539 rpcz_store.cc:269] 1025 14:09:09.838625 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.838676 (+ 51us) service_pool.cc:225] Handling call
1025 14:09:09.919366 (+ 80690us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:09.997774 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 37) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:09:09.997880 1538 rpcz_store.cc:269] 1025 14:09:09.915639 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.915709 (+ 70us) service_pool.cc:225] Handling call
1025 14:09:09.997760 (+ 82051us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.074608 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 38) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.074687 1539 rpcz_store.cc:269] 1025 14:09:09.992138 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:09.992207 (+ 69us) service_pool.cc:225] Handling call
1025 14:09:10.074596 (+ 82389us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.147085 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 39) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.147184 1538 rpcz_store.cc:269] 1025 14:09:10.068579 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.068653 (+ 74us) service_pool.cc:225] Handling call
1025 14:09:10.147071 (+ 78418us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.225575 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 40) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.225675 1539 rpcz_store.cc:269] 1025 14:09:10.145430 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.145508 (+ 78us) service_pool.cc:225] Handling call
1025 14:09:10.225558 (+ 80050us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.301801 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 41) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.301879 1538 rpcz_store.cc:269] 1025 14:09:10.221964 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.222014 (+ 50us) service_pool.cc:225] Handling call
1025 14:09:10.301787 (+ 79773us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.381491 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 42) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.381572 1539 rpcz_store.cc:269] 1025 14:09:10.298224 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.298274 (+ 50us) service_pool.cc:225] Handling call
1025 14:09:10.381477 (+ 83203us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.459578 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 43) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.459667 1538 rpcz_store.cc:269] 1025 14:09:10.375469 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.375555 (+ 86us) service_pool.cc:225] Handling call
1025 14:09:10.459565 (+ 84010us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.539526 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 44) took 87 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.539610 1539 rpcz_store.cc:269] 1025 14:09:10.452358 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.452444 (+ 86us) service_pool.cc:225] Handling call
1025 14:09:10.539513 (+ 87069us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.608023 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 45) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.608103 1538 rpcz_store.cc:269] 1025 14:09:10.529454 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.529530 (+ 76us) service_pool.cc:225] Handling call
1025 14:09:10.608010 (+ 78480us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.686378 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 46) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.686475 1539 rpcz_store.cc:269] 1025 14:09:10.606335 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.606385 (+ 50us) service_pool.cc:225] Handling call
1025 14:09:10.686363 (+ 79978us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.758005 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 47) took 75 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.758086 1538 rpcz_store.cc:269] 1025 14:09:10.682750 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.682826 (+ 76us) service_pool.cc:225] Handling call
1025 14:09:10.757987 (+ 75161us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.837723 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 48) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.837823 1538 rpcz_store.cc:269] 1025 14:09:10.759585 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.759630 (+ 45us) service_pool.cc:225] Handling call
1025 14:09:10.837709 (+ 78079us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.917186 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 49) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.917275 1539 rpcz_store.cc:269] 1025 14:09:10.836104 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.836175 (+ 71us) service_pool.cc:225] Handling call
1025 14:09:10.917172 (+ 80997us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:10.992604 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 50) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:10.992686 1538 rpcz_store.cc:269] 1025 14:09:10.912521 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.912565 (+ 44us) service_pool.cc:225] Handling call
1025 14:09:10.992588 (+ 80023us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.067059 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 51) took 78 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.067144 1539 rpcz_store.cc:269] 1025 14:09:10.988980 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:10.989069 (+ 89us) service_pool.cc:225] Handling call
1025 14:09:11.067043 (+ 77974us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.146911 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 52) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.146992 1538 rpcz_store.cc:269] 1025 14:09:11.065394 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.065542 (+ 148us) service_pool.cc:225] Handling call
1025 14:09:11.146899 (+ 81357us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.221443 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 53) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.221515 1539 rpcz_store.cc:269] 1025 14:09:11.141913 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.141985 (+ 72us) service_pool.cc:225] Handling call
1025 14:09:11.221431 (+ 79446us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.298722 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 54) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.298813 1538 rpcz_store.cc:269] 1025 14:09:11.218817 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.218895 (+ 78us) service_pool.cc:225] Handling call
1025 14:09:11.298706 (+ 79811us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.376107 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 55) took 81 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.376191 1539 rpcz_store.cc:269] 1025 14:09:11.295105 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.295165 (+ 60us) service_pool.cc:225] Handling call
1025 14:09:11.376091 (+ 80926us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.456001 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 56) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.456094 1538 rpcz_store.cc:269] 1025 14:09:11.371495 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.371561 (+ 66us) service_pool.cc:225] Handling call
1025 14:09:11.455985 (+ 84424us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.531877 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 57) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.531957 1539 rpcz_store.cc:269] 1025 14:09:11.447937 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.447999 (+ 62us) service_pool.cc:225] Handling call
1025 14:09:11.531865 (+ 83866us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.605558 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 58) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.605636 1538 rpcz_store.cc:269] 1025 14:09:11.524699 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.524753 (+ 54us) service_pool.cc:225] Handling call
1025 14:09:11.605546 (+ 80793us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.678397 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 59) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.678478 1539 rpcz_store.cc:269] 1025 14:09:11.601633 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.601675 (+ 42us) service_pool.cc:225] Handling call
1025 14:09:11.678382 (+ 76707us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.755232 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 60) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.755313 1538 rpcz_store.cc:269] 1025 14:09:11.678269 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.678309 (+ 40us) service_pool.cc:225] Handling call
1025 14:09:11.755218 (+ 76909us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.834635 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 61) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.834718 1539 rpcz_store.cc:269] 1025 14:09:11.755218 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.755264 (+ 46us) service_pool.cc:225] Handling call
1025 14:09:11.834623 (+ 79359us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.861702 1494 catalog_manager.cc:4712] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet b74f7644b6a84e84800f2178f8915401: Remote error: Service unavailable: service kudu.tserver.TabletServerAdminService not registered on TabletServer
W20251025 14:09:11.914389 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 62) took 82 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.914471 1538 rpcz_store.cc:269] 1025 14:09:11.832024 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.832093 (+ 69us) service_pool.cc:225] Handling call
1025 14:09:11.914377 (+ 82284us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:11.984450 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 63) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:11.984536 1539 rpcz_store.cc:269] 1025 14:09:11.908394 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.908473 (+ 79us) service_pool.cc:225] Handling call
1025 14:09:11.984434 (+ 75961us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.070766 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 64) took 84 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.070885 1539 rpcz_store.cc:269] 1025 14:09:11.985982 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:11.986015 (+ 33us) service_pool.cc:225] Handling call
1025 14:09:12.070753 (+ 84738us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.147893 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 65) took 85 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.147976 1538 rpcz_store.cc:269] 1025 14:09:12.062454 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.062533 (+ 79us) service_pool.cc:225] Handling call
1025 14:09:12.147880 (+ 85347us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.215798 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 66) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.215881 1539 rpcz_store.cc:269] 1025 14:09:12.139618 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.139663 (+ 45us) service_pool.cc:225] Handling call
1025 14:09:12.215783 (+ 76120us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.298233 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 67) took 80 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.298316 1539 rpcz_store.cc:269] 1025 14:09:12.217281 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.217327 (+ 46us) service_pool.cc:225] Handling call
1025 14:09:12.298220 (+ 80893us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.370371 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 68) took 76 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.370455 1538 rpcz_store.cc:269] 1025 14:09:12.293672 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.293744 (+ 72us) service_pool.cc:225] Handling call
1025 14:09:12.370355 (+ 76611us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.453482 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 69) took 83 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.453576 1539 rpcz_store.cc:269] 1025 14:09:12.369976 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.370040 (+ 64us) service_pool.cc:225] Handling call
1025 14:09:12.453466 (+ 83426us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.533653 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 70) took 86 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.533736 1538 rpcz_store.cc:269] 1025 14:09:12.447105 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.447146 (+ 41us) service_pool.cc:225] Handling call
1025 14:09:12.533638 (+ 86492us) inbound_call.cc:173] Queueing success response
Metrics: {}
W20251025 14:09:12.603905 1539 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 71) took 79 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.603981 1539 rpcz_store.cc:269] 1025 14:09:12.524351 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.524398 (+ 47us) service_pool.cc:225] Handling call
1025 14:09:12.603891 (+ 79493us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:09:12.638190 31499 tablet_replica.cc:333] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: stopping tablet replica
I20251025 14:09:12.638307 31499 raft_consensus.cc:2243] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:12.638384 31499 raft_consensus.cc:2272] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Raft consensus is shut down!
W20251025 14:09:12.641417 1665 meta_cache.cc:302] tablet c04bc36d356b4746bae843dbae8fee5e: replica 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407) has failed: Network error: Client connection negotiation failed: client connection to 127.30.194.193:42407: connect: Connection refused (error 111)
I20251025 14:09:12.652835 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:09:12.654513 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:12.655582 1738 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:12.655627 1739 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:12.655759 1741 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:12.655872 31499 server_base.cc:1047] running on GCE node
I20251025 14:09:12.655941 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:12.655957 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:12.655970 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401352655970 us; error 0 us; skew 500 ppm
I20251025 14:09:12.656541 31499 webserver.cc:492] Webserver started at http://127.30.194.193:33837/ using document root <none> and password file <none>
I20251025 14:09:12.656625 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:12.656665 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:09:12.657243 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:09:12.657924 1746 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:12.658057 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:09:12.658105 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "8b781a0fd97848e19e621599588e1e4e"
format_stamp: "Formatted at 2025-10-25 14:09:06 on dist-test-slave-v4l2"
I20251025 14:09:12.658161 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestRestartingWhileCommittingAndDeleting.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:12.664422 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:12.664577 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:12.664965 1754 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20251025 14:09:12.666544 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20251025 14:09:12.666599 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.002s user 0.000s sys 0.000s
I20251025 14:09:12.666635 31499 ts_tablet_manager.cc:600] Registering tablets (0/3 complete)
I20251025 14:09:12.667171 1754 tablet_bootstrap.cc:492] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:12.667809 31499 ts_tablet_manager.cc:616] Registered 3 tablets
I20251025 14:09:12.667860 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:12.671320 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:42407
I20251025 14:09:12.671337 1814 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:42407 every 8 connection(s)
I20251025 14:09:12.673197 1815 heartbeater.cc:344] Connected to a master server at 127.30.194.254:40799
I20251025 14:09:12.673262 1815 heartbeater.cc:461] Registering TS with master...
I20251025 14:09:12.673367 1815 heartbeater.cc:507] Master 127.30.194.254:40799 requested a full tablet report, sending...
I20251025 14:09:12.673607 1508 ts_manager.cc:194] Re-registered known tserver with Master: 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407)
I20251025 14:09:12.673949 1508 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:60332
I20251025 14:09:12.682741 1754 tablet_bootstrap.cc:492] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Bootstrap replayed 1/1 log segments. Stats: ops{read=106 overwritten=0 applied=106 ignored=0} inserts{seen=2521 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:12.682966 1754 tablet_bootstrap.cc:492] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Bootstrap complete.
I20251025 14:09:12.683068 1754 ts_tablet_manager.cc:1403] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.016s user 0.009s sys 0.004s
I20251025 14:09:12.683157 1754 raft_consensus.cc:359] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.683204 1754 raft_consensus.cc:740] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:12.683238 1754 consensus_queue.cc:260] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 106, Last appended: 1.106, Last appended by leader: 106, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.683264 1754 raft_consensus.cc:399] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:12.683283 1754 raft_consensus.cc:493] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:12.683301 1754 raft_consensus.cc:3060] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:12.683801 1754 raft_consensus.cc:515] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.683854 1754 leader_election.cc:304] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:12.683940 1754 leader_election.cc:290] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:12.684011 1823 raft_consensus.cc:2804] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:12.684038 1754 ts_tablet_manager.cc:1434] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.002s sys 0.001s
I20251025 14:09:12.684137 1815 heartbeater.cc:499] Master 127.30.194.254:40799 was elected leader, sending a full tablet report...
I20251025 14:09:12.684159 1823 raft_consensus.cc:697] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:12.684218 1823 consensus_queue.cc:237] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 106, Committed index: 106, Last appended: 1.106, Last appended by leader: 106, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.684155 1754 tablet_bootstrap.cc:492] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:12.684696 1508 catalog_manager.cc:5649] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
W20251025 14:09:12.689849 1538 rpcz_store.cc:267] Call kudu.transactions.TxnManagerService.KeepTransactionAlive from 127.0.0.1:60316 (request call id 72) took 88 ms (client timeout 74 ms). Trace:
W20251025 14:09:12.689918 1538 rpcz_store.cc:269] 1025 14:09:12.601371 (+ 0us) service_pool.cc:168] Inserting onto call queue
1025 14:09:12.601414 (+ 43us) service_pool.cc:225] Handling call
1025 14:09:12.689842 (+ 88428us) inbound_call.cc:173] Queueing success response
Metrics: {}
I20251025 14:09:12.695230 1754 tablet_bootstrap.cc:492] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Bootstrap replayed 1/1 log segments. Stats: ops{read=108 overwritten=0 applied=108 ignored=0} inserts{seen=2489 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:12.695504 1754 tablet_bootstrap.cc:492] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Bootstrap complete.
I20251025 14:09:12.695631 1754 ts_tablet_manager.cc:1403] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.011s user 0.006s sys 0.003s
I20251025 14:09:12.695746 1754 raft_consensus.cc:359] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.695792 1754 raft_consensus.cc:740] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:12.695860 1754 consensus_queue.cc:260] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 108, Last appended: 1.108, Last appended by leader: 108, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.695904 1754 raft_consensus.cc:399] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:12.695947 1754 raft_consensus.cc:493] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:12.695977 1754 raft_consensus.cc:3060] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:12.696513 1754 raft_consensus.cc:515] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.696571 1754 leader_election.cc:304] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:12.696633 1823 raft_consensus.cc:2804] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:12.696674 1823 raft_consensus.cc:697] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:12.696712 1823 consensus_queue.cc:237] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 108, Committed index: 108, Last appended: 1.108, Last appended by leader: 108, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.696875 1754 leader_election.cc:290] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:12.697032 1754 ts_tablet_manager.cc:1434] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.001s user 0.002s sys 0.001s
I20251025 14:09:12.697176 1754 tablet_bootstrap.cc:492] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Bootstrap starting.
I20251025 14:09:12.697149 1508 catalog_manager.cc:5649] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:12.699501 1754 tablet_bootstrap.cc:492] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=4 ignored=0} mutations{seen=1 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:12.699827 1754 tablet_bootstrap.cc:492] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Bootstrap complete.
I20251025 14:09:12.699923 1754 ts_tablet_manager.cc:1403] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Time spent bootstrapping tablet: real 0.003s user 0.001s sys 0.000s
I20251025 14:09:12.700026 1754 raft_consensus.cc:359] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.700114 1754 raft_consensus.cc:740] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Initialized, Role: FOLLOWER
I20251025 14:09:12.700182 1754 consensus_queue.cc:260] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.700227 1754 raft_consensus.cc:399] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:12.700269 1754 raft_consensus.cc:493] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:12.700299 1754 raft_consensus.cc:3060] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:12.700821 1754 raft_consensus.cc:515] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.700878 1754 leader_election.cc:304] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8b781a0fd97848e19e621599588e1e4e; no voters:
I20251025 14:09:12.700939 1823 raft_consensus.cc:2804] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:12.700980 1823 raft_consensus.cc:697] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Becoming Leader. State: Replica: 8b781a0fd97848e19e621599588e1e4e, State: Running, Role: LEADER
I20251025 14:09:12.701067 1823 consensus_queue.cc:237] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } }
I20251025 14:09:12.701228 1754 leader_election.cc:290] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:12.701346 1823 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } } }
I20251025 14:09:12.701396 1823 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:12.701414 1508 catalog_manager.cc:5649] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:12.701583 1836 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:09:12.701653 1836 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:09:12.701611 1754 ts_tablet_manager.cc:1434] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e: Time spent starting tablet: real 0.002s user 0.001s sys 0.000s
I20251025 14:09:12.701817 1836 txn_status_manager.cc:716] Starting 1 commit tasks
I20251025 14:09:12.701862 1824 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: New leader 8b781a0fd97848e19e621599588e1e4e. Latest consensus state: current_term: 2 leader_uuid: "8b781a0fd97848e19e621599588e1e4e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8b781a0fd97848e19e621599588e1e4e" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 42407 } } }
I20251025 14:09:12.701946 1824 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
W20251025 14:09:12.702366 1820 meta_cache.cc:788] Not found: LookupRpcById { tablet: '1903fc6353994f2fb22faf73d634c97f', attempt: 1 } failed
I20251025 14:09:12.702409 1820 txn_status_manager.cc:244] Participant 1903fc6353994f2fb22faf73d634c97f of txn 0 returned error for BEGIN_COMMIT op, aborting: Not found: LookupRpcById { tablet: '1903fc6353994f2fb22faf73d634c97f', attempt: 1 } failed
I20251025 14:09:12.706101 1818 txn_status_manager.cc:206] Scheduling write for ABORT_IN_PROGRESS for txn 0
I20251025 14:09:12.707134 1820 txn_status_manager.cc:337] Participant 1903fc6353994f2fb22faf73d634c97f was not found for ABORT_TXN, proceeding as if op succeeded: Not found: LookupRpcById { tablet: '1903fc6353994f2fb22faf73d634c97f', attempt: 1 } failed
I20251025 14:09:12.710427 31499 tablet_server.cc:178] TabletServer@127.30.194.193:42407 shutting down...
I20251025 14:09:12.714095 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:12.714246 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:12.714296 31499 raft_consensus.cc:2243] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:12.714332 31499 raft_consensus.cc:2272] T c04bc36d356b4746bae843dbae8fee5e P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:12.714601 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:12.714646 31499 raft_consensus.cc:2243] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:12.714676 31499 raft_consensus.cc:2272] T fde7a2ed767b42ae9a04c835da2744c4 P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:12.714903 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:12.714941 31499 raft_consensus.cc:2243] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:12.714964 31499 raft_consensus.cc:2272] T d87cc678173a407996d5dbf6b60adaba P 8b781a0fd97848e19e621599588e1e4e [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:12.716902 31499 tablet_server.cc:195] TabletServer@127.30.194.193:42407 shutdown complete.
I20251025 14:09:12.718187 31499 master.cc:561] Master@127.30.194.254:40799 shutting down...
W20251025 14:09:15.986752 1494 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.30.194.193:42407: connect: Connection refused (error 111) [suppressed 7 similar messages]
W20251025 14:09:15.987210 1494 catalog_manager.cc:4712] TS 8b781a0fd97848e19e621599588e1e4e (127.30.194.193:42407): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet b74f7644b6a84e84800f2178f8915401: Network error: Client connection negotiation failed: client connection to 127.30.194.193:42407: connect: Connection refused (error 111)
I20251025 14:09:16.311873 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:16.311991 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:16.312013 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 23b75580e2dc495ab6b5273d2694976b: stopping tablet replica
I20251025 14:09:16.323995 31499 master.cc:583] Master@127.30.194.254:40799 shutdown complete.
[ OK ] TxnCommitITest.TestRestartingWhileCommittingAndDeleting (10037 ms)
[ RUN ] TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters
I20251025 14:09:16.327694 31499 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.30.194.254:42933
I20251025 14:09:16.327843 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:16.328946 1844 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:16.329126 31499 server_base.cc:1047] running on GCE node
W20251025 14:09:16.328980 1845 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:16.329067 1847 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:16.329309 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:16.329345 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:16.329363 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401356329364 us; error 0 us; skew 500 ppm
I20251025 14:09:16.329936 31499 webserver.cc:492] Webserver started at http://127.30.194.254:33795/ using document root <none> and password file <none>
I20251025 14:09:16.330021 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:16.330072 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:16.330122 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:16.330379 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root/instance:
uuid: "37e89ac31daf4afca583bb8e40a4c9f1"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:16.331264 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:16.331738 1852 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.331869 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.331918 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "37e89ac31daf4afca583bb8e40a4c9f1"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:16.331957 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:16.349453 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:16.349640 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:16.352468 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:42933
I20251025 14:09:16.354048 1914 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:42933 every 8 connection(s)
I20251025 14:09:16.354166 1915 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:16.355154 1915 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Bootstrap starting.
I20251025 14:09:16.355396 1915 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:16.355988 1915 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: No bootstrap required, opened a new log
I20251025 14:09:16.356110 1915 raft_consensus.cc:359] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:16.356161 1915 raft_consensus.cc:385] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:16.356175 1915 raft_consensus.cc:740] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 37e89ac31daf4afca583bb8e40a4c9f1, State: Initialized, Role: FOLLOWER
I20251025 14:09:16.356212 1915 consensus_queue.cc:260] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:16.356232 1915 raft_consensus.cc:399] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:16.356247 1915 raft_consensus.cc:493] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:16.356266 1915 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:16.356784 1915 raft_consensus.cc:515] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:16.356840 1915 leader_election.cc:304] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 37e89ac31daf4afca583bb8e40a4c9f1; no voters:
I20251025 14:09:16.356932 1915 leader_election.cc:290] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:16.357048 1918 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:16.357107 1915 sys_catalog.cc:565] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:09:16.357208 1918 raft_consensus.cc:697] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 LEADER]: Becoming Leader. State: Replica: 37e89ac31daf4afca583bb8e40a4c9f1, State: Running, Role: LEADER
I20251025 14:09:16.357290 1918 consensus_queue.cc:237] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:16.357510 1919 sys_catalog.cc:455] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER } }
I20251025 14:09:16.357542 1920 sys_catalog.cc:455] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 37e89ac31daf4afca583bb8e40a4c9f1. Latest consensus state: current_term: 1 leader_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER } }
I20251025 14:09:16.357579 1919 sys_catalog.cc:458] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:16.357582 1920 sys_catalog.cc:458] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:16.357905 1924 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:09:16.358040 1924 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:09:16.358467 31499 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20251025 14:09:16.358589 1924 catalog_manager.cc:1357] Generated new cluster ID: 17d85c7e293c49dc89d398e0c61bbf42
I20251025 14:09:16.358639 1924 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:09:16.378895 1924 catalog_manager.cc:1380] Generated new certificate authority record
I20251025 14:09:16.379341 1924 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:09:16.386094 1924 catalog_manager.cc:6022] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Generated new TSK 0
I20251025 14:09:16.386176 1924 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20251025 14:09:16.390228 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:16.391474 1942 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:16.391508 1943 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:16.391541 1945 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:16.391701 31499 server_base.cc:1047] running on GCE node
I20251025 14:09:16.391781 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:16.391816 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:16.391844 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401356391843 us; error 0 us; skew 500 ppm
I20251025 14:09:16.392369 31499 webserver.cc:492] Webserver started at http://127.30.194.193:43907/ using document root <none> and password file <none>
I20251025 14:09:16.392444 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:16.392495 31499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20251025 14:09:16.392546 31499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20251025 14:09:16.392807 31499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root/instance:
uuid: "c073aa8b46f043319f11fa30f2ef6079"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:16.393702 31499 fs_manager.cc:696] Time spent creating directory manager: real 0.001s user 0.000s sys 0.001s
I20251025 14:09:16.394136 1950 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.394255 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.394307 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "c073aa8b46f043319f11fa30f2ef6079"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:16.394351 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:16.397464 1869 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48698:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:09:16.428818 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:16.429060 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:16.429384 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20251025 14:09:16.429412 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.429435 31499 ts_tablet_manager.cc:616] Registered 0 tablets
I20251025 14:09:16.429450 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:16.432688 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:43461
I20251025 14:09:16.432766 2015 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:43461 every 8 connection(s)
I20251025 14:09:16.433445 2016 heartbeater.cc:344] Connected to a master server at 127.30.194.254:42933
I20251025 14:09:16.433523 2016 heartbeater.cc:461] Registering TS with master...
I20251025 14:09:16.433640 2016 heartbeater.cc:507] Master 127.30.194.254:42933 requested a full tablet report, sending...
I20251025 14:09:16.433845 1869 ts_manager.cc:194] Registered new tserver with Master: c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193:43461)
I20251025 14:09:16.434062 31499 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.001129486s
I20251025 14:09:16.434652 1869 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:48704
I20251025 14:09:17.402410 1869 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48722:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
I20251025 14:09:17.406387 1980 tablet_service.cc:1505] Processing CreateTablet for tablet 8ab9e39c5edb4dc8adf2562eaa122345 (TXN_STATUS_TABLE table=kudu_system.kudu_transactions [id=4d9e0be4bd7a40a985c0daedd5ce1b8a]), partition=RANGE (txn_id) PARTITION 0 <= VALUES < 1000000
I20251025 14:09:17.406508 1980 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8ab9e39c5edb4dc8adf2562eaa122345. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:17.407642 2036 tablet_bootstrap.cc:492] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.407995 2036 tablet_bootstrap.cc:654] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:17.408478 2036 tablet_bootstrap.cc:492] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: No bootstrap required, opened a new log
I20251025 14:09:17.408521 2036 ts_tablet_manager.cc:1403] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.408645 2036 raft_consensus.cc:359] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.408705 2036 raft_consensus.cc:385] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:17.408722 2036 raft_consensus.cc:740] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.408782 2036 consensus_queue.cc:260] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.408831 2036 raft_consensus.cc:399] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.408849 2036 raft_consensus.cc:493] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.408882 2036 raft_consensus.cc:3060] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:17.409493 2036 raft_consensus.cc:515] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.409574 2036 leader_election.cc:304] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.409693 2036 leader_election.cc:290] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:17.409736 2038 raft_consensus.cc:2804] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:17.409884 2036 ts_tablet_manager.cc:1434] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.409911 2016 heartbeater.cc:499] Master 127.30.194.254:42933 was elected leader, sending a full tablet report...
I20251025 14:09:17.409888 2038 raft_consensus.cc:697] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.410034 2038 consensus_queue.cc:237] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.410250 2039 tablet_replica.cc:442] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } } }
I20251025 14:09:17.410298 2040 tablet_replica.cc:442] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: TxnStatusTablet state changed. Reason: New leader c073aa8b46f043319f11fa30f2ef6079. Latest consensus state: current_term: 1 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } } }
I20251025 14:09:17.410351 2039 tablet_replica.cc:445] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:17.410378 2040 tablet_replica.cc:445] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:17.410590 1869 catalog_manager.cc:5649] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 0 to 1, leader changed from <none> to c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:17.410766 2042 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:09:17.410808 2042 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:09:17.468072 31499 test_util.cc:276] Using random seed: 909640102
I20251025 14:09:17.473608 1869 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48754:
name: "test-workload"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
hash_schema {
columns {
name: "key"
}
num_buckets: 2
seed: 0
}
}
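The CreateTable request above fully specifies the test-workload layout: a three-column schema (key INT32 primary key, int_val INT32, nullable string_val STRING), replication factor 1, and a hash partition on "key" with 2 buckets, which is why exactly two CreateTablet calls follow. A minimal sketch of that same request as plain Python data, with field names that simply mirror the dump (illustrative only, not a client API):

    test_workload_request = {
        "name": "test-workload",
        "schema": [
            {"name": "key",        "type": "INT32",  "is_key": True,  "nullable": False},
            {"name": "int_val",    "type": "INT32",  "is_key": False, "nullable": False},
            {"name": "string_val", "type": "STRING", "is_key": False, "nullable": True},
        ],
        "num_replicas": 1,
        # HASH (key) into 2 buckets; each bucket spans RANGE (key) PARTITION UNBOUNDED.
        "partition": {"hash_columns": ["key"], "num_buckets": 2, "seed": 0},
    }
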
I20251025 14:09:17.475224 1980 tablet_service.cc:1505] Processing CreateTablet for tablet 2b0fbc34f55e444bbd7a7e40b472f6ad (DEFAULT_TABLE table=test-workload [id=8e34de89b2f148ddaa6c0b512301a048]), partition=HASH (key) PARTITION 0, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:17.475265 1979 tablet_service.cc:1505] Processing CreateTablet for tablet 1a1dfe0b8b5f4b3aa62df39997304493 (DEFAULT_TABLE table=test-workload [id=8e34de89b2f148ddaa6c0b512301a048]), partition=HASH (key) PARTITION 1, RANGE (key) PARTITION UNBOUNDED
I20251025 14:09:17.475353 1980 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2b0fbc34f55e444bbd7a7e40b472f6ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:17.475440 1979 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1a1dfe0b8b5f4b3aa62df39997304493. 1 dirs total, 0 dirs full, 0 dirs failed
I20251025 14:09:17.476357 2036 tablet_bootstrap.cc:492] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.476692 2036 tablet_bootstrap.cc:654] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:17.477213 2036 tablet_bootstrap.cc:492] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: No bootstrap required, opened a new log
I20251025 14:09:17.477257 2036 ts_tablet_manager.cc:1403] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.477366 2036 raft_consensus.cc:359] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.477419 2036 raft_consensus.cc:385] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:17.477432 2036 raft_consensus.cc:740] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.477466 2036 consensus_queue.cc:260] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.477502 2036 raft_consensus.cc:399] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.477530 2036 raft_consensus.cc:493] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.477545 2036 raft_consensus.cc:3060] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:17.477994 2036 raft_consensus.cc:515] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.478044 2036 leader_election.cc:304] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.478077 2036 leader_election.cc:290] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:17.478142 2038 raft_consensus.cc:2804] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:17.478164 2036 ts_tablet_manager.cc:1434] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.478200 2038 raft_consensus.cc:697] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.478224 2036 tablet_bootstrap.cc:492] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.478236 2038 consensus_queue.cc:237] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.478612 2036 tablet_bootstrap.cc:654] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Neither blocks nor log segments found. Creating new log.
I20251025 14:09:17.478699 1868 catalog_manager.cc:5649] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 0 to 1, leader changed from <none> to c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:17.479187 2036 tablet_bootstrap.cc:492] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: No bootstrap required, opened a new log
I20251025 14:09:17.479234 2036 ts_tablet_manager.cc:1403] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.479365 2036 raft_consensus.cc:359] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.479398 2036 raft_consensus.cc:385] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20251025 14:09:17.479410 2036 raft_consensus.cc:740] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.479444 2036 consensus_queue.cc:260] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.479477 2036 raft_consensus.cc:399] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.479496 2036 raft_consensus.cc:493] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.479513 2036 raft_consensus.cc:3060] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 0 FOLLOWER]: Advancing to term 1
I20251025 14:09:17.479941 2036 raft_consensus.cc:515] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.479981 2036 leader_election.cc:304] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.480015 2036 leader_election.cc:290] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 1 election: Requested vote from peers
I20251025 14:09:17.480062 2039 raft_consensus.cc:2804] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Leader election won for term 1
I20251025 14:09:17.480080 2036 ts_tablet_manager.cc:1434] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:17.480111 2039 raft_consensus.cc:697] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.480152 2039 consensus_queue.cc:237] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.480654 1868 catalog_manager.cc:5649] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 0 to 1, leader changed from <none> to c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193). New cstate: current_term: 1 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
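Each election above is a single-voter election: the replica notes "Only one voter in the Raft config", votes for itself, and the summary "received 1 responses out of 1 voters: 1 yes votes; 0 no votes" decides the election immediately. A minimal sketch of that majority rule, illustrating only the arithmetic in the summary line (not Kudu's implementation):

    def election_decided(num_voters: int, yes_votes: int, no_votes: int) -> str:
        # A candidate wins once yes votes reach a strict majority of the voter set;
        # with a single voter, its own vote is already a majority.
        majority = num_voters // 2 + 1
        if yes_votes >= majority:
            return "candidate won"
        if no_votes >= majority:
            return "candidate lost"
        return "undecided"

    # Matches the election summaries logged above.
    assert election_decided(num_voters=1, yes_votes=1, no_votes=0) == "candidate won"
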
I20251025 14:09:17.539189 31499 master.cc:561] Master@127.30.194.254:42933 shutting down...
I20251025 14:09:17.541707 31499 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:17.541770 31499 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:17.541805 31499 tablet_replica.cc:333] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: stopping tablet replica
I20251025 14:09:17.554044 31499 master.cc:583] Master@127.30.194.254:42933 shutdown complete.
I20251025 14:09:17.555491 31499 tablet_server.cc:178] TabletServer@127.30.194.193:0 shutting down...
I20251025 14:09:17.558413 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:17.558713 31499 tablet_replica.cc:333] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: stopping tablet replica
I20251025 14:09:17.558784 31499 raft_consensus.cc:2243] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:17.558828 31499 raft_consensus.cc:2272] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:17.559104 31499 tablet_replica.cc:333] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: stopping tablet replica
I20251025 14:09:17.559165 31499 raft_consensus.cc:2243] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:17.559216 31499 raft_consensus.cc:2272] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:17.559548 31499 tablet_replica.cc:333] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: stopping tablet replica
I20251025 14:09:17.559602 31499 raft_consensus.cc:2243] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 LEADER]: Raft consensus shutting down.
I20251025 14:09:17.559650 31499 raft_consensus.cc:2272] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:17.571914 31499 tablet_server.cc:195] TabletServer@127.30.194.193:0 shutdown complete.
I20251025 14:09:17.573292 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:17.574441 2074 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:17.574657 31499 server_base.cc:1047] running on GCE node
W20251025 14:09:17.574657 2075 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:17.574745 2077 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:17.574834 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:17.574870 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:17.574891 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401357574891 us; error 0 us; skew 500 ppm
I20251025 14:09:17.575397 31499 webserver.cc:492] Webserver started at http://127.30.194.193:43907/ using document root <none> and password file <none>
I20251025 14:09:17.575474 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:17.575512 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:09:17.576090 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:09:17.576673 2082 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:17.576802 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:09:17.576850 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
uuid: "c073aa8b46f043319f11fa30f2ef6079"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:17.576891 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:17.586450 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:17.586613 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:17.587033 2090 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
W20251025 14:09:17.587323 2088 txn_system_client.cc:479] unable to initialize TxnSystemClient, will retry in 1.000s: Network error: Client connection negotiation failed: client connection to 127.30.194.254:42933: connect: Connection refused (error 111)
I20251025 14:09:17.588459 31499 ts_tablet_manager.cc:585] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20251025 14:09:17.588508 31499 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.002s user 0.000s sys 0.000s
I20251025 14:09:17.588531 31499 ts_tablet_manager.cc:600] Registering tablets (0/3 complete)
I20251025 14:09:17.588903 2090 tablet_bootstrap.cc:492] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.589524 31499 ts_tablet_manager.cc:616] Registered 3 tablets
I20251025 14:09:17.589574 31499 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s user 0.001s sys 0.000s
I20251025 14:09:17.590202 2090 tablet_bootstrap.cc:492] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=1 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:17.590436 2090 tablet_bootstrap.cc:492] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap complete.
I20251025 14:09:17.590549 2090 ts_tablet_manager.cc:1403] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.002s user 0.001s sys 0.000s
I20251025 14:09:17.590672 2090 raft_consensus.cc:359] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.590739 2090 raft_consensus.cc:740] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.590788 2090 consensus_queue.cc:260] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.590839 2090 raft_consensus.cc:399] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.590860 2090 raft_consensus.cc:493] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.590888 2090 raft_consensus.cc:3060] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:17.591383 2090 raft_consensus.cc:515] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.591441 2090 leader_election.cc:304] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.591557 2090 leader_election.cc:290] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:17.591601 2128 raft_consensus.cc:2804] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:17.591701 2090 ts_tablet_manager.cc:1434] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.002s sys 0.000s
I20251025 14:09:17.591744 2128 raft_consensus.cc:697] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.591765 2090 tablet_bootstrap.cc:492] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.591789 2128 consensus_queue.cc:237] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.591965 2131 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } } }
I20251025 14:09:17.592041 2131 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:17.592155 2134 tablet_replica.cc:442] TxnStatusTablet state changed. Reason: New leader c073aa8b46f043319f11fa30f2ef6079. Latest consensus state: current_term: 2 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } } }
I20251025 14:09:17.592207 2134 tablet_replica.cc:445] This TxnStatusTablet replica's current role is: LEADER
I20251025 14:09:17.592295 2143 txn_status_manager.cc:874] Waiting until node catch up with all replicated operations in previous term...
I20251025 14:09:17.592350 2143 txn_status_manager.cc:930] Loading transaction status metadata into memory...
I20251025 14:09:17.592898 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.193:43461
W20251025 14:09:17.594076 2050 txn_manager_proxy_rpc.cc:150] re-attempting BeginTransaction request to one of TxnManagers
I20251025 14:09:17.594089 2156 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.193:43461 every 8 connection(s)
W20251025 14:09:17.594374 2157 heartbeater.cc:646] Failed to heartbeat to 127.30.194.254:42933 (0 consecutive failures): Network error: Failed to ping master at 127.30.194.254:42933: Client connection negotiation failed: client connection to 127.30.194.254:42933: connect: Connection refused (error 111)
W20251025 14:09:17.594756 2157 heartbeater.cc:412] Failed 3 heartbeats in a row: no longer allowing fast heartbeat attempts.
I20251025 14:09:17.602591 2090 tablet_bootstrap.cc:492] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap replayed 1/1 log segments. Stats: ops{read=91 overwritten=0 applied=91 ignored=0} inserts{seen=2269 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:17.602823 2090 tablet_bootstrap.cc:492] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Bootstrap complete.
I20251025 14:09:17.602936 2090 ts_tablet_manager.cc:1403] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.011s user 0.008s sys 0.000s
I20251025 14:09:17.603049 2090 raft_consensus.cc:359] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.603099 2090 raft_consensus.cc:740] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.603142 2090 consensus_queue.cc:260] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 91, Last appended: 1.91, Last appended by leader: 91, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.603183 2090 raft_consensus.cc:399] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.603201 2090 raft_consensus.cc:493] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.603237 2090 raft_consensus.cc:3060] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:17.603737 2090 raft_consensus.cc:515] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.603791 2090 leader_election.cc:304] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.603822 2090 leader_election.cc:290] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:17.603892 2131 raft_consensus.cc:2804] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:17.603914 2090 ts_tablet_manager.cc:1434] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:17.603948 2131 raft_consensus.cc:697] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.603969 2090 tablet_bootstrap.cc:492] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Bootstrap starting.
I20251025 14:09:17.603993 2131 consensus_queue.cc:237] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 91, Committed index: 91, Last appended: 1.91, Last appended by leader: 91, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.612874 2090 tablet_bootstrap.cc:492] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Bootstrap replayed 1/1 log segments. Stats: ops{read=91 overwritten=0 applied=91 ignored=0} inserts{seen=2231 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:17.613184 2090 tablet_bootstrap.cc:492] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Bootstrap complete.
I20251025 14:09:17.613297 2090 ts_tablet_manager.cc:1403] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Time spent bootstrapping tablet: real 0.009s user 0.009s sys 0.000s
I20251025 14:09:17.613396 2090 raft_consensus.cc:359] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.613440 2090 raft_consensus.cc:740] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Initialized, Role: FOLLOWER
I20251025 14:09:17.613476 2090 consensus_queue.cc:260] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 91, Last appended: 1.91, Last appended by leader: 91, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.613519 2090 raft_consensus.cc:399] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:17.613548 2090 raft_consensus.cc:493] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:17.613574 2090 raft_consensus.cc:3060] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:17.614032 2090 raft_consensus.cc:515] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
I20251025 14:09:17.614084 2090 leader_election.cc:304] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c073aa8b46f043319f11fa30f2ef6079; no voters:
I20251025 14:09:17.614115 2090 leader_election.cc:290] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:17.614184 2134 raft_consensus.cc:2804] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:17.614207 2090 ts_tablet_manager.cc:1434] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079: Time spent starting tablet: real 0.001s user 0.000s sys 0.000s
I20251025 14:09:17.614236 2134 raft_consensus.cc:697] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Becoming Leader. State: Replica: c073aa8b46f043319f11fa30f2ef6079, State: Running, Role: LEADER
I20251025 14:09:17.614276 2134 consensus_queue.cc:237] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 91, Committed index: 91, Last appended: 1.91, Last appended by leader: 91, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } }
W20251025 14:09:17.690023 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:18.690411 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:19.165087 2050 txn_manager_proxy_rpc.cc:150] re-attempting BeginTransaction request to one of TxnManagers
W20251025 14:09:19.690433 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:20.701182 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:21.469723 2050 txn_manager_proxy_rpc.cc:150] re-attempting BeginTransaction request to one of TxnManagers
W20251025 14:09:21.703100 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:22.708379 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:23.708436 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:24.542142 2050 txn_manager_proxy_rpc.cc:150] re-attempting BeginTransaction request to one of TxnManagers
W20251025 14:09:24.713521 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:25.715617 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:26.715941 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:27.604152 2143 txn_status_manager.cc:670] Unable to initialize TxnSystemClient: Timed out: Unable to get client in 10.000s: Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:27.604226 2143 txn_status_manager.cc:931] Time spent Loading transaction status metadata into memory: real 10.012s user 0.003s sys 0.000s
W20251025 14:09:27.701051 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.108s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:27.717064 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
W20251025 14:09:27.801276 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.209s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:27.901533 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.309s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:28.001750 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.409s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:28.101961 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.509s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:28.202183 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.609s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
W20251025 14:09:28.302404 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.710s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
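With the master still down, the tablet server cannot finish initializing its TxnSystemClient, so the BeginTransaction and KeepTransactionAlive RPCs above are re-attempted roughly once per second ("will retry in 1.000s"), while the stale-transaction abort is re-tried on its own ~100 ms cycle until the client becomes available. A minimal sketch of such a fixed-interval retry loop (illustrative only; the function and parameter names are not from Kudu):

    import time

    def retry_until(op, interval_s: float, deadline_s: float) -> bool:
        # Call op() until it reports success or the deadline expires,
        # sleeping a fixed interval between attempts (e.g. 1.0 s, as in
        # the "will retry in 1.000s" message above).
        stop = time.monotonic() + deadline_s
        while True:
            if op():
                return True
            if time.monotonic() + interval_s > stop:
                return False
            time.sleep(interval_s)
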
I20251025 14:09:28.383560 31499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20251025 14:09:28.385020 2175 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:28.385130 2173 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20251025 14:09:28.385011 2172 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20251025 14:09:28.385190 31499 server_base.cc:1047] running on GCE node
I20251025 14:09:28.385289 31499 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20251025 14:09:28.385324 31499 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20251025 14:09:28.385341 31499 hybrid_clock.cc:648] HybridClock initialized: now 1761401368385341 us; error 0 us; skew 500 ppm
I20251025 14:09:28.385922 31499 webserver.cc:492] Webserver started at http://127.30.194.254:33795/ using document root <none> and password file <none>
I20251025 14:09:28.386008 31499 fs_manager.cc:362] Metadata directory not provided
I20251025 14:09:28.386052 31499 fs_manager.cc:365] Using existing metadata directory in first data directory
I20251025 14:09:28.386593 31499 fs_manager.cc:714] Time spent opening directory manager: real 0.000s user 0.001s sys 0.000s
I20251025 14:09:28.387045 2180 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:28.387198 31499 fs_manager.cc:730] Time spent opening block manager: real 0.000s user 0.000s sys 0.000s
I20251025 14:09:28.387243 31499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
uuid: "37e89ac31daf4afca583bb8e40a4c9f1"
format_stamp: "Formatted at 2025-10-25 14:09:16 on dist-test-slave-v4l2"
I20251025 14:09:28.387300 31499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskcRt9OY/test-tmp/txn_commit-itest.0.TxnCommitITest.TestLoadTxnStatusManagerWhenNoMasters.1761401299035012-31499-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20251025 14:09:28.395949 31499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20251025 14:09:28.396131 31499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20251025 14:09:28.399977 31499 rpc_server.cc:307] RPC server started. Bound to: 127.30.194.254:42933
I20251025 14:09:28.400676 2242 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.30.194.254:42933 every 8 connection(s)
I20251025 14:09:28.401257 2243 sys_catalog.cc:263] Verifying existing consensus state
I20251025 14:09:28.401618 2243 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Bootstrap starting.
W20251025 14:09:28.402621 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.810s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
I20251025 14:09:28.403335 2243 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=8 ignored=0} mutations{seen=6 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20251025 14:09:28.403546 2243 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Bootstrap complete.
I20251025 14:09:28.403746 2243 raft_consensus.cc:359] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:28.403802 2243 raft_consensus.cc:740] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 37e89ac31daf4afca583bb8e40a4c9f1, State: Initialized, Role: FOLLOWER
I20251025 14:09:28.403842 2243 consensus_queue.cc:260] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 1.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:28.403879 2243 raft_consensus.cc:399] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20251025 14:09:28.403908 2243 raft_consensus.cc:493] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20251025 14:09:28.403935 2243 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 1 FOLLOWER]: Advancing to term 2
I20251025 14:09:28.404407 2243 raft_consensus.cc:515] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:28.404464 2243 leader_election.cc:304] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 37e89ac31daf4afca583bb8e40a4c9f1; no voters:
I20251025 14:09:28.404554 2243 leader_election.cc:290] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [CANDIDATE]: Term 2 election: Requested vote from peers
I20251025 14:09:28.404608 2246 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 2 FOLLOWER]: Leader election won for term 2
I20251025 14:09:28.404706 2243 sys_catalog.cc:565] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: configured and running, proceeding with master startup.
I20251025 14:09:28.404742 2246 raft_consensus.cc:697] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [term 2 LEADER]: Becoming Leader. State: Replica: 37e89ac31daf4afca583bb8e40a4c9f1, State: Running, Role: LEADER
I20251025 14:09:28.404831 2246 consensus_queue.cc:237] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 1.11, Last appended by leader: 11, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER }
I20251025 14:09:28.405054 2247 sys_catalog.cc:455] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER } }
I20251025 14:09:28.405133 2247 sys_catalog.cc:458] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:28.405058 2248 sys_catalog.cc:455] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 37e89ac31daf4afca583bb8e40a4c9f1. Latest consensus state: current_term: 2 leader_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "37e89ac31daf4afca583bb8e40a4c9f1" member_type: VOTER } }
I20251025 14:09:28.405284 2248 sys_catalog.cc:458] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1 [sys.catalog]: This master's current role is: LEADER
I20251025 14:09:28.405464 2252 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20251025 14:09:28.405690 2252 catalog_manager.cc:679] Loaded metadata for table kudu_system.kudu_transactions [id=4d9e0be4bd7a40a985c0daedd5ce1b8a]
I20251025 14:09:28.406793 2252 catalog_manager.cc:679] Loaded metadata for table test-workload [id=8e34de89b2f148ddaa6c0b512301a048]
I20251025 14:09:28.406996 2252 tablet_loader.cc:96] loaded metadata for tablet 1a1dfe0b8b5f4b3aa62df39997304493 (table test-workload [id=8e34de89b2f148ddaa6c0b512301a048])
I20251025 14:09:28.407045 2252 tablet_loader.cc:96] loaded metadata for tablet 2b0fbc34f55e444bbd7a7e40b472f6ad (table test-workload [id=8e34de89b2f148ddaa6c0b512301a048])
I20251025 14:09:28.407088 2252 tablet_loader.cc:96] loaded metadata for tablet 8ab9e39c5edb4dc8adf2562eaa122345 (table kudu_system.kudu_transactions [id=4d9e0be4bd7a40a985c0daedd5ce1b8a])
I20251025 14:09:28.407265 2252 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20251025 14:09:28.407486 2252 catalog_manager.cc:1269] Loaded cluster ID: 17d85c7e293c49dc89d398e0c61bbf42
I20251025 14:09:28.407572 2252 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20251025 14:09:28.408480 2252 catalog_manager.cc:1514] Loading token signing keys...
I20251025 14:09:28.408622 2252 catalog_manager.cc:6033] T 00000000000000000000000000000000 P 37e89ac31daf4afca583bb8e40a4c9f1: Loaded TSK: 0
I20251025 14:09:28.408752 2252 catalog_manager.cc:1524] Initializing in-progress tserver states...
W20251025 14:09:28.411861 2050 txn_manager_proxy_rpc.cc:150] re-attempting BeginTransaction request to one of TxnManagers
I20251025 14:09:28.413225 2197 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39532:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
W20251025 14:09:28.413601 2243 master.cc:464] Invalid argument: unable to initialize TxnManager: Error creating table kudu_system.kudu_transactions on the master: not enough live tablet servers to create a table with the requested replication factor 1; 0 tablet servers are alive: unable to init TxnManager, will retry in 1.000s
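The warning above is the expected outcome at this point: the restarted master services the CreateTable for kudu_system.kudu_transactions before any tablet server has re-registered, so a replication factor of 1 cannot be satisfied with 0 live tablet servers, and the TxnManager init is retried about a second later. A minimal sketch of that admission check (illustrative only, not the catalog manager's code):

    def can_create_table(requested_rf: int, live_tservers: int) -> bool:
        # Creation is rejected while fewer tablet servers are alive than the
        # requested replication factor.
        return live_tservers >= requested_rf

    # 0 tablet servers alive vs. requested replication factor 1, as in the warning above.
    assert can_create_table(requested_rf=1, live_tservers=0) is False
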
W20251025 14:09:28.502823 2089 txn_status_manager.cc:1397] failed to abort stale txn (ID 0) past 10.910s from last keepalive heartbeat (effective timeout is 0.300s): Service unavailable: could not get TxnSystemClient, still initializing
I20251025 14:09:28.604044 2089 txn_status_manager.cc:1391] automatically aborted stale txn (ID 0) past 11.010s from last keepalive heartbeat (effective timeout is 0.300s)
I20251025 14:09:28.620162 2157 heartbeater.cc:344] Connected to a master server at 127.30.194.254:42933
I20251025 14:09:28.620242 2157 heartbeater.cc:461] Registering TS with master...
I20251025 14:09:28.620358 2157 heartbeater.cc:507] Master 127.30.194.254:42933 requested a full tablet report, sending...
I20251025 14:09:28.620637 2197 ts_manager.cc:194] Registered new tserver with Master: c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193:43461)
I20251025 14:09:28.620697 2197 catalog_manager.cc:5649] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:28.620774 2197 catalog_manager.cc:5649] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:28.620805 2197 catalog_manager.cc:5649] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "c073aa8b46f043319f11fa30f2ef6079" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c073aa8b46f043319f11fa30f2ef6079" member_type: VOTER last_known_addr { host: "127.30.194.193" port: 43461 } health_report { overall_health: HEALTHY } } }
I20251025 14:09:28.621930 2197 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:39542
W20251025 14:09:28.718806 2050 txn_manager_proxy_rpc.cc:150] re-attempting KeepTransactionAlive request to one of TxnManagers
I20251025 14:09:29.418499 2197 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39552:
name: "kudu_system.kudu_transactions"
schema {
columns {
name: "txn_id"
type: INT64
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "entry_type"
type: INT8
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "identifier"
type: STRING
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "metadata"
type: STRING
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
rows: "\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000""\006\001\000\000\000\000\000\000\000\000\007\001@B\017\000\000\000\000\000"
indirect_data: """"
}
partition_schema {
range_schema {
columns {
name: "txn_id"
}
}
}
table_type: TXN_STATUS_TABLE
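(Editorial aside, not part of the log: the CreateTable request dumped above fully specifies the transaction status table's user-visible schema. The real kudu_system.kudu_transactions table is created internally by the master's TxnManager, and its table_type of TXN_STATUS_TABLE cannot be set through the public API; the following is only a minimal sketch showing how a table with the same columns, composite primary key, range partitioning on txn_id, and num_replicas(1) could be declared with the public Kudu C++ client. The master address 127.0.0.1:7051 and the table name txn_status_schema_demo are placeholders, and the explicit range bound carried in split_rows_range_bounds is omitted.)

// Sketch under the assumptions stated above; placeholders are marked.
#include <memory>
#include <string>
#include <vector>

#include <kudu/client/client.h>

using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

int main() {
  // Connect to a master; the address is a placeholder.
  kudu::client::sp::shared_ptr<KuduClient> client;
  Status s = KuduClientBuilder()
                 .add_master_server_addr("127.0.0.1:7051")
                 .Build(&client);
  if (!s.ok()) return 1;

  // Columns and composite primary key as in the dumped CreateTable request.
  KuduSchemaBuilder b;
  b.AddColumn("txn_id")->Type(KuduColumnSchema::INT64)->NotNull();
  b.AddColumn("entry_type")->Type(KuduColumnSchema::INT8)->NotNull();
  b.AddColumn("identifier")->Type(KuduColumnSchema::STRING)->NotNull();
  b.AddColumn("metadata")->Type(KuduColumnSchema::STRING)->NotNull();
  b.SetPrimaryKey({"txn_id", "entry_type", "identifier"});
  KuduSchema schema;
  s = b.Build(&schema);
  if (!s.ok()) return 1;

  // Range-partition on txn_id with replication factor 1, mirroring
  // num_replicas: 1 and the range_schema in the request above.
  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  s = creator->table_name("txn_status_schema_demo")  // placeholder name
          .schema(&schema)
          .set_range_partition_columns({"txn_id"})
          .num_replicas(1)
          .Create();
  return s.ok() ? 0 : 1;
}

(End of aside; the log continues below.)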
I20251025 14:09:29.427677 31499 tablet_server.cc:178] TabletServer@127.30.194.193:43461 shutting down...
I20251025 14:09:29.430491 31499 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20251025 14:09:29.430629 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:29.430696 31499 raft_consensus.cc:2243] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:29.430727 31499 raft_consensus.cc:2272] T 2b0fbc34f55e444bbd7a7e40b472f6ad P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:29.430955 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:29.430992 31499 raft_consensus.cc:2243] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:29.431030 31499 raft_consensus.cc:2272] T 8ab9e39c5edb4dc8adf2562eaa122345 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Raft consensus is shut down!
I20251025 14:09:29.431223 31499 tablet_replica.cc:333] stopping tablet replica
I20251025 14:09:29.431255 31499 raft_consensus.cc:2243] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 2 LEADER]: Raft consensus shutting down.
I20251025 14:09:29.431279 31499 raft_consensus.cc:2272] T 1a1dfe0b8b5f4b3aa62df39997304493 P c073aa8b46f043319f11fa30f2ef6079 [term 2 FOLLOWER]: Raft consensus is shut down!
W20251025 14:09:29.434355 2283 meta_cache.cc:302] tablet 8ab9e39c5edb4dc8adf2562eaa122345: replica c073aa8b46f043319f11fa30f2ef6079 (127.30.194.193:43461) has failed: Network error: Client connection negotiation failed: client connection to 127.30.194.193:43461: connect: Connection refused (error 111)
I20251025 14:09:29.442970 31499 tablet_server.cc:195] TabletServer@127.30.194.193:43461 shutdown complete.
I20251025 14:09:29.444039 31499 master.cc:561] Master@127.30.194.254:42933 shutting down...
W20251025 14:09:29.447639 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:30.445991 31499 thread.cc:527] Waited for 1000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:30.460091 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:31.446238 31499 thread.cc:527] Waited for 2000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:31.488638 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:32.446484 31499 thread.cc:527] Waited for 3000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:32.559162 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:33.446723 31499 thread.cc:527] Waited for 4000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:33.582757 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:34.446969 31499 thread.cc:527] Waited for 5000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:34.640872 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:35.447232 31499 thread.cc:527] Waited for 6000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:35.712713 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:36.447475 31499 thread.cc:527] Waited for 7000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:36.764218 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:37.447721 31499 thread.cc:527] Waited for 8000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:37.766986 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:38.447968 31499 thread.cc:527] Waited for 9000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:38.839022 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:39.448212 31499 thread.cc:527] Waited for 10000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:39.970652 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:40.448470 31499 thread.cc:527] Waited for 11000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:41.012660 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:41.448722 31499 thread.cc:527] Waited for 12000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:42.098995 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:42.448962 31499 thread.cc:527] Waited for 13000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:43.231407 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:43.449224 31499 thread.cc:527] Waited for 14000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:44.241072 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:44.449483 31499 thread.cc:527] Waited for 15000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:45.290620 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:45.449738 31499 thread.cc:527] Waited for 16000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:46.378295 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:46.449990 31499 thread.cc:527] Waited for 17000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:47.450215 31499 thread.cc:527] Waited for 18000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:47.505393 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:48.450426 31499 thread.cc:527] Waited for 19000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:48.660981 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:49.450629 31499 thread.cc:527] Waited for 20000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:49.852948 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:50.450856 31499 thread.cc:527] Waited for 21000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:50.873966 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:51.451131 31499 thread.cc:527] Waited for 22000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:51.922807 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:52.451395 31499 thread.cc:527] Waited for 23000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:53.000527 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:53.451683 31499 thread.cc:527] Waited for 24000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:54.106519 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:54.451936 31499 thread.cc:527] Waited for 25000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:55.235674 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:55.452175 31499 thread.cc:527] Waited for 26000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:56.389680 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:56.452417 31499 thread.cc:527] Waited for 27000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:57.452657 31499 thread.cc:527] Waited for 28000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:57.570117 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:58.452915 31499 thread.cc:527] Waited for 29000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:58.766669 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:09:59.453208 31499 thread.cc:527] Waited for 30000ms trying to join with rpc worker (tid 2227)
W20251025 14:09:59.990409 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:00.453455 31499 thread.cc:527] Waited for 31000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:01.242241 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:01.453717 31499 thread.cc:527] Waited for 32000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:02.259941 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:02.453977 31499 thread.cc:527] Waited for 33000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:03.291402 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:03.454245 31499 thread.cc:527] Waited for 34000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:04.337411 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:04.454504 31499 thread.cc:527] Waited for 35000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:05.398425 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:05.454731 31499 thread.cc:527] Waited for 36000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:06.454964 31499 thread.cc:527] Waited for 37000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:06.486243 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:07.455240 31499 thread.cc:527] Waited for 38000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:07.579197 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:08.455482 31499 thread.cc:527] Waited for 39000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:08.694119 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:09.455750 31499 thread.cc:527] Waited for 40000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:09.823238 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:10.456017 31499 thread.cc:527] Waited for 41000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:10.966310 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:11.456271 31499 thread.cc:527] Waited for 42000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:12.129457 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:12.456538 31499 thread.cc:527] Waited for 43000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:13.308429 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:13.456800 31499 thread.cc:527] Waited for 44000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:14.457055 31499 thread.cc:527] Waited for 45000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:14.503350 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:15.457326 31499 thread.cc:527] Waited for 46000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:15.711544 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:16.457598 31499 thread.cc:527] Waited for 47000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:16.936897 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:17.457859 31499 thread.cc:527] Waited for 48000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:18.178145 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:18.458114 31499 thread.cc:527] Waited for 49000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:19.435619 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:19.458374 31499 thread.cc:527] Waited for 50000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:20.458568 31499 thread.cc:527] Waited for 51000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:20.707026 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:21.458827 31499 thread.cc:527] Waited for 52000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:21.997637 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:22.459087 31499 thread.cc:527] Waited for 53000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:23.307089 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:23.459371 31499 thread.cc:527] Waited for 54000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:24.459640 31499 thread.cc:527] Waited for 55000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:24.628350 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:25.459908 31499 thread.cc:527] Waited for 56000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:25.631724 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:26.460163 31499 thread.cc:527] Waited for 57000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:26.643404 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:27.460443 31499 thread.cc:527] Waited for 58000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:27.664886 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:28.460711 31499 thread.cc:527] Waited for 59000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:28.690071 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:29.460978 31499 thread.cc:527] Waited for 60000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:29.728550 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:30.461306 31499 thread.cc:527] Waited for 61000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:30.775025 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:31.461555 31499 thread.cc:527] Waited for 62000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:31.834398 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:32.461805 31499 thread.cc:527] Waited for 63000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:32.897760 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:33.462040 31499 thread.cc:527] Waited for 64000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:33.972186 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:34.462306 31499 thread.cc:527] Waited for 65000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:35.054008 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:35.462553 31499 thread.cc:527] Waited for 66000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:36.141604 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:36.462797 31499 thread.cc:527] Waited for 67000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:37.246357 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:37.463054 31499 thread.cc:527] Waited for 68000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:38.352928 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:38.463343 31499 thread.cc:527] Waited for 69000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:39.463624 31499 thread.cc:527] Waited for 70000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:39.472433 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:40.463886 31499 thread.cc:527] Waited for 71000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:40.603765 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:41.464131 31499 thread.cc:527] Waited for 72000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:41.738426 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:42.464383 31499 thread.cc:527] Waited for 73000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:42.885969 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:43.464632 31499 thread.cc:527] Waited for 74000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:44.040691 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:44.464885 31499 thread.cc:527] Waited for 75000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:45.206267 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:45.465160 31499 thread.cc:527] Waited for 76000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:46.382108 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:46.465406 31499 thread.cc:527] Waited for 77000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:47.465652 31499 thread.cc:527] Waited for 78000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:47.564862 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:48.465880 31499 thread.cc:527] Waited for 79000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:48.757350 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:49.466102 31499 thread.cc:527] Waited for 80000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:49.958900 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:50.466359 31499 thread.cc:527] Waited for 81000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:51.169564 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:51.466630 31499 thread.cc:527] Waited for 82000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:52.384721 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:52.466840 31499 thread.cc:527] Waited for 83000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:53.467038 31499 thread.cc:527] Waited for 84000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:53.609524 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:54.467298 31499 thread.cc:527] Waited for 85000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:54.847582 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:55.467556 31499 thread.cc:527] Waited for 86000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:56.095268 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:56.467804 31499 thread.cc:527] Waited for 87000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:57.348320 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:57.468032 31499 thread.cc:527] Waited for 88000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:58.468264 31499 thread.cc:527] Waited for 89000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:58.611109 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:10:59.468498 31499 thread.cc:527] Waited for 90000ms trying to join with rpc worker (tid 2227)
W20251025 14:10:59.885874 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:00.468770 31499 thread.cc:527] Waited for 91000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:01.169924 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:01.469056 31499 thread.cc:527] Waited for 92000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:02.457835 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:02.469362 31499 thread.cc:527] Waited for 93000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:03.469604 31499 thread.cc:527] Waited for 94000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:03.757993 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:04.469858 31499 thread.cc:527] Waited for 95000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:05.065881 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:05.470113 31499 thread.cc:527] Waited for 96000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:06.385890 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:06.470342 31499 thread.cc:527] Waited for 97000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:07.470573 31499 thread.cc:527] Waited for 98000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:07.714514 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:08.470790 31499 thread.cc:527] Waited for 99000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:09.047120 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:09.471024 31499 thread.cc:527] Waited for 100000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:10.390738 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:10.471361 31499 thread.cc:527] Waited for 101000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:11.471619 31499 thread.cc:527] Waited for 102000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:11.745358 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:12.471881 31499 thread.cc:527] Waited for 103000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:13.107740 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:13.472138 31499 thread.cc:527] Waited for 104000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:14.472389 31499 thread.cc:527] Waited for 105000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:14.478523 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:15.472631 31499 thread.cc:527] Waited for 106000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:15.862709 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:16.472950 31499 thread.cc:527] Waited for 107000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:17.255728 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:17.473278 31499 thread.cc:527] Waited for 108000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:18.473511 31499 thread.cc:527] Waited for 109000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:18.653578 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:19.473762 31499 thread.cc:527] Waited for 110000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:20.057771 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:20.474002 31499 thread.cc:527] Waited for 111000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:21.472862 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:21.474193 31499 thread.cc:527] Waited for 112000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:22.474370 31499 thread.cc:527] Waited for 113000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:22.901058 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:23.474604 31499 thread.cc:527] Waited for 114000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:24.338326 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:24.474853 31499 thread.cc:527] Waited for 115000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:25.475092 31499 thread.cc:527] Waited for 116000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:25.786518 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:26.475328 31499 thread.cc:527] Waited for 117000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:27.236714 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:27.475571 31499 thread.cc:527] Waited for 118000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:28.475844 31499 thread.cc:527] Waited for 119000ms trying to join with rpc worker (tid 2227)
I20251025 14:11:28.500428 2186 maintenance_manager.cc:419] P 37e89ac31daf4afca583bb8e40a4c9f1: Scheduling FlushMRSOp(00000000000000000000000000000000): perf score=0.033360
I20251025 14:11:28.504973 2185 maintenance_manager.cc:643] P 37e89ac31daf4afca583bb8e40a4c9f1: FlushMRSOp(00000000000000000000000000000000) complete. Timing: real 0.004s user 0.003s sys 0.000s Metrics: {"bytes_written":7334,"cfile_init":1,"dirs.queue_time_us":132,"dirs.run_cpu_time_us":172,"dirs.run_wall_time_us":894,"drs_written":1,"lbm_read_time_us":14,"lbm_reads_lt_1ms":4,"lbm_write_time_us":141,"lbm_writes_lt_1ms":27,"peak_mem_usage":0,"rows_written":8,"thread_start_us":135,"threads_started":2,"wal-append.queue_time_us":122}
I20251025 14:11:28.505252 2186 maintenance_manager.cc:419] P 37e89ac31daf4afca583bb8e40a4c9f1: Scheduling UndoDeltaBlockGCOp(00000000000000000000000000000000): 637 bytes on disk
I20251025 14:11:28.505493 2185 maintenance_manager.cc:643] P 37e89ac31daf4afca583bb8e40a4c9f1: UndoDeltaBlockGCOp(00000000000000000000000000000000) complete. Timing: real 0.000s user 0.000s sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":21,"lbm_reads_lt_1ms":4}
W20251025 14:11:28.700815 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:29.476099 31499 thread.cc:527] Waited for 120000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:30.172281 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:30.476347 31499 thread.cc:527] Waited for 121000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:31.476596 31499 thread.cc:527] Waited for 122000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:31.648663 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:32.476863 31499 thread.cc:527] Waited for 123000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:33.140013 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:33.477099 31499 thread.cc:527] Waited for 124000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:34.477330 31499 thread.cc:527] Waited for 125000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:34.638473 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:35.477574 31499 thread.cc:527] Waited for 126000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:35.643474 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:36.477818 31499 thread.cc:527] Waited for 127000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:36.652429 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:37.478046 31499 thread.cc:527] Waited for 128000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:37.664153 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:38.478271 31499 thread.cc:527] Waited for 129000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:38.677804 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:39.478484 31499 thread.cc:527] Waited for 130000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:39.698510 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:40.478708 31499 thread.cc:527] Waited for 131000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:40.724764 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:41.478920 31499 thread.cc:527] Waited for 132000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:41.754716 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:42.479168 31499 thread.cc:527] Waited for 133000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:42.787557 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:43.479416 31499 thread.cc:527] Waited for 134000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:43.823532 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:44.479681 31499 thread.cc:527] Waited for 135000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:44.863320 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:45.480046 31499 thread.cc:527] Waited for 136000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:45.908248 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:46.480300 31499 thread.cc:527] Waited for 137000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:46.956288 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:47.480537 31499 thread.cc:527] Waited for 138000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:48.007185 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:48.480783 31499 thread.cc:527] Waited for 139000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:49.059269 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:49.481036 31499 thread.cc:527] Waited for 140000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:50.119421 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:50.481314 31499 thread.cc:527] Waited for 141000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:51.184271 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:51.481575 31499 thread.cc:527] Waited for 142000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:52.252408 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:52.481837 31499 thread.cc:527] Waited for 143000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:53.321360 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:53.482100 31499 thread.cc:527] Waited for 144000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:54.395076 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:54.482370 31499 thread.cc:527] Waited for 145000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:55.472982 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:55.482856 31499 thread.cc:527] Waited for 146000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:56.483382 31499 thread.cc:527] Waited for 147000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:56.555088 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:57.483639 31499 thread.cc:527] Waited for 148000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:57.645978 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:58.483924 31499 thread.cc:527] Waited for 149000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:58.735020 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:11:59.484254 31499 thread.cc:527] Waited for 150000ms trying to join with rpc worker (tid 2227)
W20251025 14:11:59.830951 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:00.484503 31499 thread.cc:527] Waited for 151000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:00.931833 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:01.484699 31499 thread.cc:527] Waited for 152000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:02.038810 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:02.484938 31499 thread.cc:527] Waited for 153000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:03.146646 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:03.485311 31499 thread.cc:527] Waited for 154000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:04.256524 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:04.485562 31499 thread.cc:527] Waited for 155000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:05.373291 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:05.485813 31499 thread.cc:527] Waited for 156000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:06.486061 31499 thread.cc:527] Waited for 157000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:06.491246 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:07.486337 31499 thread.cc:527] Waited for 158000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:07.612406 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:08.486601 31499 thread.cc:527] Waited for 159000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:08.743599 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:09.486850 31499 thread.cc:527] Waited for 160000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:09.873752 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:10.487097 31499 thread.cc:527] Waited for 161000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:11.010756 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:11.487345 31499 thread.cc:527] Waited for 162000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:12.150966 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:12.487617 31499 thread.cc:527] Waited for 163000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:13.293123 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:13.487852 31499 thread.cc:527] Waited for 164000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:14.442579 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:14.488094 31499 thread.cc:527] Waited for 165000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:15.488286 31499 thread.cc:527] Waited for 166000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:15.596728 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:16.488484 31499 thread.cc:527] Waited for 167000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:16.750043 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:17.488703 31499 thread.cc:527] Waited for 168000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:17.908221 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:18.488950 31499 thread.cc:527] Waited for 169000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:19.072579 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:19.489194 31499 thread.cc:527] Waited for 170000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:20.242866 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:20.489424 31499 thread.cc:527] Waited for 171000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:21.414857 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:21.489642 31499 thread.cc:527] Waited for 172000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:22.489871 31499 thread.cc:527] Waited for 173000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:22.587903 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:23.490100 31499 thread.cc:527] Waited for 174000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:23.770282 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:24.490316 31499 thread.cc:527] Waited for 175000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:24.952445 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:25.490515 31499 thread.cc:527] Waited for 176000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:26.140511 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:26.490741 31499 thread.cc:527] Waited for 177000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:27.327648 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:27.490969 31499 thread.cc:527] Waited for 178000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:28.491205 31499 thread.cc:527] Waited for 179000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:28.525727 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:29.491438 31499 thread.cc:527] Waited for 180000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:29.728829 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:30.491636 31499 thread.cc:527] Waited for 181000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:30.933895 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:31.491859 31499 thread.cc:527] Waited for 182000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:32.143180 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:32.492103 31499 thread.cc:527] Waited for 183000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:33.356477 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:33.492338 31499 thread.cc:527] Waited for 184000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:34.492580 31499 thread.cc:527] Waited for 185000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:34.572726 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:35.492811 31499 thread.cc:527] Waited for 186000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:35.790059 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:36.493050 31499 thread.cc:527] Waited for 187000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:37.013417 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:37.493263 31499 thread.cc:527] Waited for 188000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:38.243880 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:38.493506 31499 thread.cc:527] Waited for 189000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:39.477160 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:39.493747 31499 thread.cc:527] Waited for 190000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:40.493990 31499 thread.cc:527] Waited for 191000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:40.710742 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:41.494217 31499 thread.cc:527] Waited for 192000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:41.946043 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:42.494441 31499 thread.cc:527] Waited for 193000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:43.189759 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:43.494781 31499 thread.cc:527] Waited for 194000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:44.437896 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:44.494992 31499 thread.cc:527] Waited for 195000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:45.495204 31499 thread.cc:527] Waited for 196000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:45.688235 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:46.495453 31499 thread.cc:527] Waited for 197000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:46.943359 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:47.495715 31499 thread.cc:527] Waited for 198000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:48.203960 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:48.496090 31499 thread.cc:527] Waited for 199000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:49.467438 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:49.496341 31499 thread.cc:527] Waited for 200000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:50.496595 31499 thread.cc:527] Waited for 201000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:50.734760 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:51.496827 31499 thread.cc:527] Waited for 202000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:52.005782 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:52.497066 31499 thread.cc:527] Waited for 203000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:53.279140 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:53.497304 31499 thread.cc:527] Waited for 204000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:54.497500 31499 thread.cc:527] Waited for 205000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:54.557348 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:55.497743 31499 thread.cc:527] Waited for 206000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:55.838440 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:56.498005 31499 thread.cc:527] Waited for 207000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:57.125922 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:57.498267 31499 thread.cc:527] Waited for 208000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:58.418238 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:12:58.498529 31499 thread.cc:527] Waited for 209000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:59.498804 31499 thread.cc:527] Waited for 210000ms trying to join with rpc worker (tid 2227)
W20251025 14:12:59.711589 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:00.499069 31499 thread.cc:527] Waited for 211000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:01.010004 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:01.499305 31499 thread.cc:527] Waited for 212000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:02.316723 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:02.499541 31499 thread.cc:527] Waited for 213000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:03.499800 31499 thread.cc:527] Waited for 214000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:03.622692 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:04.500048 31499 thread.cc:527] Waited for 215000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:04.930743 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:05.500274 31499 thread.cc:527] Waited for 216000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:06.249028 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:06.500531 31499 thread.cc:527] Waited for 217000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:07.500792 31499 thread.cc:527] Waited for 218000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:07.566679 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:08.501044 31499 thread.cc:527] Waited for 219000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:08.888417 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:09.501300 31499 thread.cc:527] Waited for 220000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:10.216427 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:10.501535 31499 thread.cc:527] Waited for 221000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:11.501793 31499 thread.cc:527] Waited for 222000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:11.547026 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:12.502044 31499 thread.cc:527] Waited for 223000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:12.881844 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:13.502286 31499 thread.cc:527] Waited for 224000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:14.220295 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:14.502529 31499 thread.cc:527] Waited for 225000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:15.502836 31499 thread.cc:527] Waited for 226000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:15.564802 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:16.503067 31499 thread.cc:527] Waited for 227000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:16.913398 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:17.503288 31499 thread.cc:527] Waited for 228000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:18.267884 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:18.503522 31499 thread.cc:527] Waited for 229000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:19.503770 31499 thread.cc:527] Waited for 230000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:19.622309 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:20.504000 31499 thread.cc:527] Waited for 231000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:20.978540 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:21.504231 31499 thread.cc:527] Waited for 232000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:22.341704 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:22.504473 31499 thread.cc:527] Waited for 233000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:23.504720 31499 thread.cc:527] Waited for 234000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:23.706930 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:24.504959 31499 thread.cc:527] Waited for 235000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:25.077522 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:25.505241 31499 thread.cc:527] Waited for 236000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:26.448959 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:26.505511 31499 thread.cc:527] Waited for 237000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:27.505712 31499 thread.cc:527] Waited for 238000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:27.830962 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:28.505949 31499 thread.cc:527] Waited for 239000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:29.216665 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:29.506196 31499 thread.cc:527] Waited for 240000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:30.506393 31499 thread.cc:527] Waited for 241000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:30.601138 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:31.506582 31499 thread.cc:527] Waited for 242000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:31.992070 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:32.506794 31499 thread.cc:527] Waited for 243000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:33.387301 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:33.506991 31499 thread.cc:527] Waited for 244000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:34.507207 31499 thread.cc:527] Waited for 245000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:34.785195 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:35.507469 31499 thread.cc:527] Waited for 246000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:36.188198 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:36.507711 31499 thread.cc:527] Waited for 247000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:37.507947 31499 thread.cc:527] Waited for 248000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:37.595459 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:38.508173 31499 thread.cc:527] Waited for 249000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:39.003578 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:39.508394 31499 thread.cc:527] Waited for 250000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:40.419524 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:40.508633 31499 thread.cc:527] Waited for 251000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:41.508872 31499 thread.cc:527] Waited for 252000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:41.837383 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:42.509203 31499 thread.cc:527] Waited for 253000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:43.263955 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:43.509454 31499 thread.cc:527] Waited for 254000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:44.509722 31499 thread.cc:527] Waited for 255000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:44.691097 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:45.509945 31499 thread.cc:527] Waited for 256000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:46.120048 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:46.510186 31499 thread.cc:527] Waited for 257000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:47.510416 31499 thread.cc:527] Waited for 258000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:47.553685 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:48.511194 31499 thread.cc:527] Waited for 259000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:48.993537 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:49.511435 31499 thread.cc:527] Waited for 260000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:50.437233 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:50.511659 31499 thread.cc:527] Waited for 261000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:51.511883 31499 thread.cc:527] Waited for 262000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:51.886739 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:52.512117 31499 thread.cc:527] Waited for 263000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:53.337249 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:53.512351 31499 thread.cc:527] Waited for 264000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:54.512588 31499 thread.cc:527] Waited for 265000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:54.792501 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:55.512832 31499 thread.cc:527] Waited for 266000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:56.248593 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:56.513092 31499 thread.cc:527] Waited for 267000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:57.513302 31499 thread.cc:527] Waited for 268000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:57.713503 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:58.513521 31499 thread.cc:527] Waited for 269000ms trying to join with rpc worker (tid 2227)
W20251025 14:13:59.181423 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:13:59.513787 31499 thread.cc:527] Waited for 270000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:00.514048 31499 thread.cc:527] Waited for 271000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:00.650319 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:01.514303 31499 thread.cc:527] Waited for 272000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:02.129053 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:02.514508 31499 thread.cc:527] Waited for 273000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:03.514755 31499 thread.cc:527] Waited for 274000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:03.608397 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:04.514974 31499 thread.cc:527] Waited for 275000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:05.091984 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:05.515197 31499 thread.cc:527] Waited for 276000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:06.515431 31499 thread.cc:527] Waited for 277000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:06.577948 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:07.515650 31499 thread.cc:527] Waited for 278000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:08.069561 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:08.515856 31499 thread.cc:527] Waited for 279000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:09.516088 31499 thread.cc:527] Waited for 280000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:09.562023 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:10.516297 31499 thread.cc:527] Waited for 281000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:11.059479 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:11.516503 31499 thread.cc:527] Waited for 282000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:12.516749 31499 thread.cc:527] Waited for 283000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:12.565953 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:13.517010 31499 thread.cc:527] Waited for 284000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:14.076812 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:14.517227 31499 thread.cc:527] Waited for 285000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:15.517457 31499 thread.cc:527] Waited for 286000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:15.584676 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:16.517707 31499 thread.cc:527] Waited for 287000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:17.095532 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:17.518035 31499 thread.cc:527] Waited for 288000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:18.518297 31499 thread.cc:527] Waited for 289000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:18.615137 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:19.518513 31499 thread.cc:527] Waited for 290000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:20.137537 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:20.518726 31499 thread.cc:527] Waited for 291000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:21.518947 31499 thread.cc:527] Waited for 292000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:21.662278 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:22.519153 31499 thread.cc:527] Waited for 293000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:23.191727 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:23.519344 31499 thread.cc:527] Waited for 294000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:24.519536 31499 thread.cc:527] Waited for 295000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:24.726292 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:25.519767 31499 thread.cc:527] Waited for 296000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:26.267235 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:26.520006 31499 thread.cc:527] Waited for 297000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:27.520237 31499 thread.cc:527] Waited for 298000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:27.810178 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:28.520459 31499 thread.cc:527] Waited for 299000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:29.357872 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:29.520678 31499 thread.cc:527] Waited for 300000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:30.520905 31499 thread.cc:527] Waited for 301000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:30.910276 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:31.521140 31499 thread.cc:527] Waited for 302000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:32.464681 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:32.521335 31499 thread.cc:527] Waited for 303000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:33.521540 31499 thread.cc:527] Waited for 304000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:34.022382 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:34.521752 31499 thread.cc:527] Waited for 305000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:35.521984 31499 thread.cc:527] Waited for 306000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:35.587875 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:36.522202 31499 thread.cc:527] Waited for 307000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:37.151698 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:37.522418 31499 thread.cc:527] Waited for 308000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:38.522637 31499 thread.cc:527] Waited for 309000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:38.720218 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:39.522836 31499 thread.cc:527] Waited for 310000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:40.295639 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:40.523016 31499 thread.cc:527] Waited for 311000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:41.523211 31499 thread.cc:527] Waited for 312000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:41.873339 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:42.523432 31499 thread.cc:527] Waited for 313000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:43.457084 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:43.523644 31499 thread.cc:527] Waited for 314000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:44.523870 31499 thread.cc:527] Waited for 315000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:45.041783 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:45.524086 31499 thread.cc:527] Waited for 316000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:46.524298 31499 thread.cc:527] Waited for 317000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:46.631429 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:47.524507 31499 thread.cc:527] Waited for 318000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:48.221967 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:48.524719 31499 thread.cc:527] Waited for 319000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:49.524914 31499 thread.cc:527] Waited for 320000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:49.821410 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:50.525139 31499 thread.cc:527] Waited for 321000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:51.421828 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:51.525344 31499 thread.cc:527] Waited for 322000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:52.525566 31499 thread.cc:527] Waited for 323000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:53.030592 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:53.525760 31499 thread.cc:527] Waited for 324000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:54.525959 31499 thread.cc:527] Waited for 325000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:54.640169 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:55.526157 31499 thread.cc:527] Waited for 326000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:56.256716 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:56.526364 31499 thread.cc:527] Waited for 327000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:57.526556 31499 thread.cc:527] Waited for 328000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:57.872267 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:58.526758 31499 thread.cc:527] Waited for 329000ms trying to join with rpc worker (tid 2227)
W20251025 14:14:59.496884 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:14:59.526958 31499 thread.cc:527] Waited for 330000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:00.527168 31499 thread.cc:527] Waited for 331000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:01.124374 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:01.527364 31499 thread.cc:527] Waited for 332000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:02.527557 31499 thread.cc:527] Waited for 333000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:02.752663 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:03.527783 31499 thread.cc:527] Waited for 334000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:04.388465 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:04.527997 31499 thread.cc:527] Waited for 335000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:05.528221 31499 thread.cc:527] Waited for 336000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:06.027992 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:06.528429 31499 thread.cc:527] Waited for 337000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:07.528635 31499 thread.cc:527] Waited for 338000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:07.671523 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:08.528864 31499 thread.cc:527] Waited for 339000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:09.314155 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:09.529083 31499 thread.cc:527] Waited for 340000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:10.529296 31499 thread.cc:527] Waited for 341000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:10.963644 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:11.529495 31499 thread.cc:527] Waited for 342000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:12.529718 31499 thread.cc:527] Waited for 343000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:12.619343 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:13.529955 31499 thread.cc:527] Waited for 344000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:14.279088 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:14.530202 31499 thread.cc:527] Waited for 345000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:15.530432 31499 thread.cc:527] Waited for 346000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:15.941833 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:16.530645 31499 thread.cc:527] Waited for 347000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:17.530861 31499 thread.cc:527] Waited for 348000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:17.610584 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:18.531100 31499 thread.cc:527] Waited for 349000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:19.278476 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:19.531339 31499 thread.cc:527] Waited for 350000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:20.531577 31499 thread.cc:527] Waited for 351000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:20.954906 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:21.531797 31499 thread.cc:527] Waited for 352000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:22.532013 31499 thread.cc:527] Waited for 353000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:22.635603 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:23.532244 31499 thread.cc:527] Waited for 354000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:24.315250 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:24.532472 31499 thread.cc:527] Waited for 355000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:25.532681 31499 thread.cc:527] Waited for 356000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:25.999876 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:26.532931 31499 thread.cc:527] Waited for 357000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:27.533231 31499 thread.cc:527] Waited for 358000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:27.690619 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:28.533457 31499 thread.cc:527] Waited for 359000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:29.386138 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:29.533689 31499 thread.cc:527] Waited for 360000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:30.533915 31499 thread.cc:527] Waited for 361000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:31.084872 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:31.534162 31499 thread.cc:527] Waited for 362000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:32.534394 31499 thread.cc:527] Waited for 363000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:32.786600 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:33.534611 31499 thread.cc:527] Waited for 364000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:34.494235 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:34.534850 31499 thread.cc:527] Waited for 365000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:35.535089 31499 thread.cc:527] Waited for 366000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:36.207031 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:36.535297 31499 thread.cc:527] Waited for 367000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:37.535503 31499 thread.cc:527] Waited for 368000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:37.923781 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:38.535725 31499 thread.cc:527] Waited for 369000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:39.535938 31499 thread.cc:527] Waited for 370000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:39.642627 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:40.536162 31499 thread.cc:527] Waited for 371000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:41.362448 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:41.536386 31499 thread.cc:527] Waited for 372000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:42.536598 31499 thread.cc:527] Waited for 373000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:43.092131 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:43.536818 31499 thread.cc:527] Waited for 374000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:44.537025 31499 thread.cc:527] Waited for 375000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:44.824047 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:45.537241 31499 thread.cc:527] Waited for 376000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:46.537480 31499 thread.cc:527] Waited for 377000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:46.559048 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:47.537714 31499 thread.cc:527] Waited for 378000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:48.297739 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:48.537956 31499 thread.cc:527] Waited for 379000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:49.538172 31499 thread.cc:527] Waited for 380000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:50.042486 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:50.538419 31499 thread.cc:527] Waited for 381000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:51.538656 31499 thread.cc:527] Waited for 382000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:51.792452 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:52.538903 31499 thread.cc:527] Waited for 383000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:53.539137 31499 thread.cc:527] Waited for 384000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:53.542385 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:54.539359 31499 thread.cc:527] Waited for 385000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:55.297027 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:55.539630 31499 thread.cc:527] Waited for 386000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:56.539852 31499 thread.cc:527] Waited for 387000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:57.052835 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:57.540081 31499 thread.cc:527] Waited for 388000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:58.540323 31499 thread.cc:527] Waited for 389000ms trying to join with rpc worker (tid 2227)
W20251025 14:15:58.815758 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:15:59.540535 31499 thread.cc:527] Waited for 390000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:00.540750 31499 thread.cc:527] Waited for 391000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:00.582543 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:01.541019 31499 thread.cc:527] Waited for 392000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:02.355501 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:02.541273 31499 thread.cc:527] Waited for 393000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:03.541471 31499 thread.cc:527] Waited for 394000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:04.129484 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:04.541669 31499 thread.cc:527] Waited for 395000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:05.541884 31499 thread.cc:527] Waited for 396000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:05.909332 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:06.542189 31499 thread.cc:527] Waited for 397000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:07.542651 31499 thread.cc:527] Waited for 398000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:07.690256 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:08.542909 31499 thread.cc:527] Waited for 399000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:09.474372 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:09.543135 31499 thread.cc:527] Waited for 400000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:10.543437 31499 thread.cc:527] Waited for 401000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:11.267334 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:11.543658 31499 thread.cc:527] Waited for 402000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:12.543895 31499 thread.cc:527] Waited for 403000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:13.062244 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:13.544090 31499 thread.cc:527] Waited for 404000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:14.544313 31499 thread.cc:527] Waited for 405000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:14.861254 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:15.544545 31499 thread.cc:527] Waited for 406000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:16.544780 31499 thread.cc:527] Waited for 407000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:16.665138 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:17.545050 31499 thread.cc:527] Waited for 408000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:18.472033 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:18.545311 31499 thread.cc:527] Waited for 409000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:19.545532 31499 thread.cc:527] Waited for 410000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:20.282915 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:20.545759 31499 thread.cc:527] Waited for 411000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:21.545972 31499 thread.cc:527] Waited for 412000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:22.099043 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:22.546195 31499 thread.cc:527] Waited for 413000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:23.546450 31499 thread.cc:527] Waited for 414000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:23.919865 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:24.546710 31499 thread.cc:527] Waited for 415000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:25.546963 31499 thread.cc:527] Waited for 416000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:25.741901 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:26.547200 31499 thread.cc:527] Waited for 417000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:27.547418 31499 thread.cc:527] Waited for 418000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:27.567713 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:28.547633 31499 thread.cc:527] Waited for 419000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:29.396404 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:29.547868 31499 thread.cc:527] Waited for 420000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:30.548118 31499 thread.cc:527] Waited for 421000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:31.231249 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:31.548331 31499 thread.cc:527] Waited for 422000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:32.548533 31499 thread.cc:527] Waited for 423000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:33.067862 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:33.548734 31499 thread.cc:527] Waited for 424000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:34.548942 31499 thread.cc:527] Waited for 425000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:34.912415 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:35.549307 31499 thread.cc:527] Waited for 426000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:36.549597 31499 thread.cc:527] Waited for 427000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:36.763332 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:37.549829 31499 thread.cc:527] Waited for 428000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:38.550055 31499 thread.cc:527] Waited for 429000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:38.617139 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:39.550266 31499 thread.cc:527] Waited for 430000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:40.474506 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:40.550486 31499 thread.cc:527] Waited for 431000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:41.550719 31499 thread.cc:527] Waited for 432000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:42.333614 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:42.550940 31499 thread.cc:527] Waited for 433000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:43.551164 31499 thread.cc:527] Waited for 434000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:44.199613 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:44.551385 31499 thread.cc:527] Waited for 435000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:45.551613 31499 thread.cc:527] Waited for 436000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:46.068775 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:46.551843 31499 thread.cc:527] Waited for 437000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:47.552078 31499 thread.cc:527] Waited for 438000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:47.942976 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:48.552306 31499 thread.cc:527] Waited for 439000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:49.552523 31499 thread.cc:527] Waited for 440000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:49.816859 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:50.552752 31499 thread.cc:527] Waited for 441000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:51.553035 31499 thread.cc:527] Waited for 442000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:51.696965 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:52.553256 31499 thread.cc:527] Waited for 443000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:53.553491 31499 thread.cc:527] Waited for 444000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:53.579039 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:54.553705 31499 thread.cc:527] Waited for 445000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:55.464931 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:55.553916 31499 thread.cc:527] Waited for 446000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:56.554133 31499 thread.cc:527] Waited for 447000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:57.355039 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:57.554345 31499 thread.cc:527] Waited for 448000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:58.554540 31499 thread.cc:527] Waited for 449000ms trying to join with rpc worker (tid 2227)
W20251025 14:16:59.247004 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:16:59.554772 31499 thread.cc:527] Waited for 450000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:00.554991 31499 thread.cc:527] Waited for 451000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:01.146114 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:01.555209 31499 thread.cc:527] Waited for 452000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:02.555469 31499 thread.cc:527] Waited for 453000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:03.046154 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:03.555716 31499 thread.cc:527] Waited for 454000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:04.555974 31499 thread.cc:527] Waited for 455000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:04.951299 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:05.556200 31499 thread.cc:527] Waited for 456000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:06.556408 31499 thread.cc:527] Waited for 457000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:06.860423 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:07.556654 31499 thread.cc:527] Waited for 458000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:08.556875 31499 thread.cc:527] Waited for 459000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:08.777642 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:09.557103 31499 thread.cc:527] Waited for 460000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:10.557350 31499 thread.cc:527] Waited for 461000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:10.695734 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:11.557574 31499 thread.cc:527] Waited for 462000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:12.557821 31499 thread.cc:527] Waited for 463000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:12.616796 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:13.558090 31499 thread.cc:527] Waited for 464000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:14.542901 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:14.558331 31499 thread.cc:527] Waited for 465000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:15.558542 31499 thread.cc:527] Waited for 466000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:16.474645 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:16.558769 31499 thread.cc:527] Waited for 467000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:17.559002 31499 thread.cc:527] Waited for 468000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:18.408546 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:18.559247 31499 thread.cc:527] Waited for 469000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:19.559468 31499 thread.cc:527] Waited for 470000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:20.343614 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:20.559710 31499 thread.cc:527] Waited for 471000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:21.559909 31499 thread.cc:527] Waited for 472000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:22.285660 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:22.560138 31499 thread.cc:527] Waited for 473000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:23.560369 31499 thread.cc:527] Waited for 474000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:24.228719 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:24.560601 31499 thread.cc:527] Waited for 475000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:25.560837 31499 thread.cc:527] Waited for 476000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:26.182915 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:26.561089 31499 thread.cc:527] Waited for 477000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:27.561304 31499 thread.cc:527] Waited for 478000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:28.136401 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:28.561491 31499 thread.cc:527] Waited for 479000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:29.561753 31499 thread.cc:527] Waited for 480000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:30.097606 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:30.562007 31499 thread.cc:527] Waited for 481000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:31.562261 31499 thread.cc:527] Waited for 482000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:32.061671 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:32.562497 31499 thread.cc:527] Waited for 483000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:33.562726 31499 thread.cc:527] Waited for 484000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:34.029052 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:34.562981 31499 thread.cc:527] Waited for 485000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:35.563249 31499 thread.cc:527] Waited for 486000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:36.000061 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:36.563506 31499 thread.cc:527] Waited for 487000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:37.563748 31499 thread.cc:527] Waited for 488000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:37.977332 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:38.563946 31499 thread.cc:527] Waited for 489000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:39.564208 31499 thread.cc:527] Waited for 490000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:39.954576 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:40.564452 31499 thread.cc:527] Waited for 491000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:41.564663 31499 thread.cc:527] Waited for 492000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:41.938691 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:42.564904 31499 thread.cc:527] Waited for 493000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:43.565176 31499 thread.cc:527] Waited for 494000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:43.926002 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:44.565408 31499 thread.cc:527] Waited for 495000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:45.565730 31499 thread.cc:527] Waited for 496000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:45.918392 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:46.565989 31499 thread.cc:527] Waited for 497000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:47.566227 31499 thread.cc:527] Waited for 498000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:47.917577 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:48.566418 31499 thread.cc:527] Waited for 499000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:49.566619 31499 thread.cc:527] Waited for 500000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:49.917668 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:50.566862 31499 thread.cc:527] Waited for 501000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:50.920938 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:51.567101 31499 thread.cc:527] Waited for 502000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:51.922132 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:52.567346 31499 thread.cc:527] Waited for 503000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:52.927202 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:53.567574 31499 thread.cc:527] Waited for 504000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:53.930089 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:54.567822 31499 thread.cc:527] Waited for 505000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:54.936231 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:55.568059 31499 thread.cc:527] Waited for 506000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:55.943348 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:56.568267 31499 thread.cc:527] Waited for 507000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:56.949564 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:57.568485 31499 thread.cc:527] Waited for 508000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:57.959640 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:58.568717 31499 thread.cc:527] Waited for 509000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:58.968771 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:17:59.568974 31499 thread.cc:527] Waited for 510000ms trying to join with rpc worker (tid 2227)
W20251025 14:17:59.981006 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:00.569250 31499 thread.cc:527] Waited for 511000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:00.992246 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:01.569490 31499 thread.cc:527] Waited for 512000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:02.005740 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:02.569718 31499 thread.cc:527] Waited for 513000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:03.020082 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:03.569939 31499 thread.cc:527] Waited for 514000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:04.032397 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:04.570154 31499 thread.cc:527] Waited for 515000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:05.049536 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:05.570371 31499 thread.cc:527] Waited for 516000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:06.067654 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:06.570595 31499 thread.cc:527] Waited for 517000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:07.086086 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:07.570814 31499 thread.cc:527] Waited for 518000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:08.102319 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:08.571028 31499 thread.cc:527] Waited for 519000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:09.121601 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:09.571245 31499 thread.cc:527] Waited for 520000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:10.141816 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:10.571460 31499 thread.cc:527] Waited for 521000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:11.161083 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:11.571678 31499 thread.cc:527] Waited for 522000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:12.184459 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:12.571893 31499 thread.cc:527] Waited for 523000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:13.206606 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:13.572108 31499 thread.cc:527] Waited for 524000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:14.231936 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:14.572324 31499 thread.cc:527] Waited for 525000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:15.259071 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:15.572646 31499 thread.cc:527] Waited for 526000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:16.283295 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:16.572845 31499 thread.cc:527] Waited for 527000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:17.311689 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:17.573091 31499 thread.cc:527] Waited for 528000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:18.341138 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:18.573295 31499 thread.cc:527] Waited for 529000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:19.372692 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:19.573472 31499 thread.cc:527] Waited for 530000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:20.401841 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:20.573654 31499 thread.cc:527] Waited for 531000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:21.432472 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:21.573850 31499 thread.cc:527] Waited for 532000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:22.462760 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:22.574043 31499 thread.cc:527] Waited for 533000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:23.495950 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:23.574327 31499 thread.cc:527] Waited for 534000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:24.532306 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:24.574573 31499 thread.cc:527] Waited for 535000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:25.566610 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:25.574769 31499 thread.cc:527] Waited for 536000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:26.574937 31499 thread.cc:527] Waited for 537000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:26.602824 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:27.575139 31499 thread.cc:527] Waited for 538000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:27.641449 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:28.575350 31499 thread.cc:527] Waited for 539000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:28.680800 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:29.575556 31499 thread.cc:527] Waited for 540000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:29.718396 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:30.575721 31499 thread.cc:527] Waited for 541000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:30.758581 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:31.575917 31499 thread.cc:527] Waited for 542000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:31.801576 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:32.576099 31499 thread.cc:527] Waited for 543000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:32.844961 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:33.576279 31499 thread.cc:527] Waited for 544000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:33.886179 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:34.576476 31499 thread.cc:527] Waited for 545000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:34.932386 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:35.576709 31499 thread.cc:527] Waited for 546000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:35.976679 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:36.576968 31499 thread.cc:527] Waited for 547000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:37.023186 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:37.577255 31499 thread.cc:527] Waited for 548000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:38.071401 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:38.577462 31499 thread.cc:527] Waited for 549000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:39.118680 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:39.577685 31499 thread.cc:527] Waited for 550000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:40.167948 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:40.577908 31499 thread.cc:527] Waited for 551000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:41.216192 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:41.578138 31499 thread.cc:527] Waited for 552000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:42.268574 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:42.578395 31499 thread.cc:527] Waited for 553000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:43.322959 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:43.578613 31499 thread.cc:527] Waited for 554000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:44.378252 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:44.578837 31499 thread.cc:527] Waited for 555000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:45.431546 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:45.579080 31499 thread.cc:527] Waited for 556000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:46.488632 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:46.579308 31499 thread.cc:527] Waited for 557000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:47.542824 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:47.579548 31499 thread.cc:527] Waited for 558000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:48.579794 31499 thread.cc:527] Waited for 559000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:48.599277 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:49.580021 31499 thread.cc:527] Waited for 560000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:49.659592 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:50.580267 31499 thread.cc:527] Waited for 561000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:50.720901 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:51.580508 31499 thread.cc:527] Waited for 562000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:51.781134 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:52.580741 31499 thread.cc:527] Waited for 563000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:52.840626 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:53.581027 31499 thread.cc:527] Waited for 564000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:53.903839 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:54.581269 31499 thread.cc:527] Waited for 565000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:54.967265 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:55.581516 31499 thread.cc:527] Waited for 566000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:56.033748 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:56.581765 31499 thread.cc:527] Waited for 567000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:57.101030 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:57.581988 31499 thread.cc:527] Waited for 568000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:58.165370 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:58.582226 31499 thread.cc:527] Waited for 569000ms trying to join with rpc worker (tid 2227)
W20251025 14:18:59.232149 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:18:59.582520 31499 thread.cc:527] Waited for 570000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:00.302163 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:00.582757 31499 thread.cc:527] Waited for 571000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:01.369400 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:01.582989 31499 thread.cc:527] Waited for 572000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:02.440588 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:02.583314 31499 thread.cc:527] Waited for 573000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:03.512717 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:03.583564 31499 thread.cc:527] Waited for 574000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:04.583105 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:04.583760 31499 thread.cc:527] Waited for 575000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:05.584012 31499 thread.cc:527] Waited for 576000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:05.658469 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:06.584260 31499 thread.cc:527] Waited for 577000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:06.733860 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:07.584487 31499 thread.cc:527] Waited for 578000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:07.808267 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:08.584743 31499 thread.cc:527] Waited for 579000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:08.884572 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:09.584981 31499 thread.cc:527] Waited for 580000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:09.961835 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:10.585258 31499 thread.cc:527] Waited for 581000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:11.038113 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:11.585516 31499 thread.cc:527] Waited for 582000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:12.118367 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:12.585745 31499 thread.cc:527] Waited for 583000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:13.200495 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:13.585988 31499 thread.cc:527] Waited for 584000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:14.282191 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:14.586216 31499 thread.cc:527] Waited for 585000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:15.365562 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:15.586433 31499 thread.cc:527] Waited for 586000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:16.447746 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:16.586694 31499 thread.cc:527] Waited for 587000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:17.531183 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:17.586949 31499 thread.cc:527] Waited for 588000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:18.587203 31499 thread.cc:527] Waited for 589000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:18.615594 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:19.587440 31499 thread.cc:527] Waited for 590000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:19.701817 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:20.587689 31499 thread.cc:527] Waited for 591000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:20.791532 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:21.587921 31499 thread.cc:527] Waited for 592000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:21.881572 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:22.588193 31499 thread.cc:527] Waited for 593000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:22.971072 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:23.588455 31499 thread.cc:527] Waited for 594000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:24.059257 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:24.588665 31499 thread.cc:527] Waited for 595000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:25.149569 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:25.588867 31499 thread.cc:527] Waited for 596000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:26.241829 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:26.589067 31499 thread.cc:527] Waited for 597000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:27.336266 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:27.589267 31499 thread.cc:527] Waited for 598000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:28.428579 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:28.589463 31499 thread.cc:527] Waited for 599000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:29.525971 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:29.589674 31499 thread.cc:527] Waited for 600000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:30.589923 31499 thread.cc:527] Waited for 601000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:30.623273 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:31.590170 31499 thread.cc:527] Waited for 602000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:31.718757 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:32.590406 31499 thread.cc:527] Waited for 603000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:32.816231 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:33.590629 31499 thread.cc:527] Waited for 604000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:33.917584 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:34.590833 31499 thread.cc:527] Waited for 605000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:35.017017 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:35.591051 31499 thread.cc:527] Waited for 606000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:36.117398 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:36.591271 31499 thread.cc:527] Waited for 607000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:37.219048 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:37.591516 31499 thread.cc:527] Waited for 608000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:38.323673 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:38.591742 31499 thread.cc:527] Waited for 609000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:39.426196 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:39.591997 31499 thread.cc:527] Waited for 610000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:40.531759 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:40.592235 31499 thread.cc:527] Waited for 611000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:41.592459 31499 thread.cc:527] Waited for 612000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:41.640177 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:42.592670 31499 thread.cc:527] Waited for 613000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:42.747403 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:43.592924 31499 thread.cc:527] Waited for 614000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:43.853813 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:44.593191 31499 thread.cc:527] Waited for 615000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:44.962149 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:45.593408 31499 thread.cc:527] Waited for 616000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:46.072618 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:46.593614 31499 thread.cc:527] Waited for 617000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:47.183109 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:47.593845 31499 thread.cc:527] Waited for 618000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:48.297501 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:48.594038 31499 thread.cc:527] Waited for 619000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:49.408926 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:49.594262 31499 thread.cc:527] Waited for 620000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:50.525331 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:50.594465 31499 thread.cc:527] Waited for 621000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:51.594695 31499 thread.cc:527] Waited for 622000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:51.638676 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:52.594897 31499 thread.cc:527] Waited for 623000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:52.753953 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:53.595108 31499 thread.cc:527] Waited for 624000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:53.871387 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:54.595327 31499 thread.cc:527] Waited for 625000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:54.988824 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:55.595539 31499 thread.cc:527] Waited for 626000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:56.106128 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:56.595746 31499 thread.cc:527] Waited for 627000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:57.228622 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:57.595969 31499 thread.cc:527] Waited for 628000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:58.348878 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:58.596189 31499 thread.cc:527] Waited for 629000ms trying to join with rpc worker (tid 2227)
W20251025 14:19:59.472337 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:19:59.596426 31499 thread.cc:527] Waited for 630000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:00.596657 31499 thread.cc:527] Waited for 631000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:00.596730 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:01.596917 31499 thread.cc:527] Waited for 632000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:01.719131 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:02.597168 31499 thread.cc:527] Waited for 633000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:02.845492 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:03.597397 31499 thread.cc:527] Waited for 634000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:03.973778 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:04.597635 31499 thread.cc:527] Waited for 635000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:05.099130 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:05.597889 31499 thread.cc:527] Waited for 636000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:06.229628 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:06.598126 31499 thread.cc:527] Waited for 637000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:07.357025 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:07.598357 31499 thread.cc:527] Waited for 638000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:08.486290 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:08.598604 31499 thread.cc:527] Waited for 639000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:09.598838 31499 thread.cc:527] Waited for 640000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:09.617780 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:10.599117 31499 thread.cc:527] Waited for 641000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:10.750206 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:11.599362 31499 thread.cc:527] Waited for 642000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:11.881589 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:12.599591 31499 thread.cc:527] Waited for 643000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:13.015967 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:13.599810 31499 thread.cc:527] Waited for 644000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:14.151254 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:14.600054 31499 thread.cc:527] Waited for 645000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:15.287614 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:15.600283 31499 thread.cc:527] Waited for 646000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:16.424065 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:16.600538 31499 thread.cc:527] Waited for 647000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:17.562450 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:17.600785 31499 thread.cc:527] Waited for 648000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:18.601023 31499 thread.cc:527] Waited for 649000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:18.701807 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:19.601271 31499 thread.cc:527] Waited for 650000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:19.843325 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:20.601506 31499 thread.cc:527] Waited for 651000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:20.986570 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:21.601722 31499 thread.cc:527] Waited for 652000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:22.131048 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:22.601933 31499 thread.cc:527] Waited for 653000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:23.274349 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:23.602167 31499 thread.cc:527] Waited for 654000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:24.418841 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:24.602396 31499 thread.cc:527] Waited for 655000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:25.565114 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:25.602604 31499 thread.cc:527] Waited for 656000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:26.602844 31499 thread.cc:527] Waited for 657000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:26.713413 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:27.603078 31499 thread.cc:527] Waited for 658000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:27.858834 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:28.603318 31499 thread.cc:527] Waited for 659000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:29.007026 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:29.603529 31499 thread.cc:527] Waited for 660000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:30.155417 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:30.603739 31499 thread.cc:527] Waited for 661000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:31.306598 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:31.603962 31499 thread.cc:527] Waited for 662000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:32.458287 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:32.604228 31499 thread.cc:527] Waited for 663000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:33.604458 31499 thread.cc:527] Waited for 664000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:33.612573 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:34.604679 31499 thread.cc:527] Waited for 665000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:34.765031 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:35.604911 31499 thread.cc:527] Waited for 666000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:35.921368 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:36.605201 31499 thread.cc:527] Waited for 667000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:37.075784 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:37.605453 31499 thread.cc:527] Waited for 668000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:38.232363 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:38.605671 31499 thread.cc:527] Waited for 669000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:39.390594 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:39.605855 31499 thread.cc:527] Waited for 670000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:40.550007 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:40.606078 31499 thread.cc:527] Waited for 671000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:41.606284 31499 thread.cc:527] Waited for 672000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:41.708196 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:42.606498 31499 thread.cc:527] Waited for 673000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:42.870733 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:43.606688 31499 thread.cc:527] Waited for 674000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:44.034262 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:44.606873 31499 thread.cc:527] Waited for 675000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:45.194653 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:45.607060 31499 thread.cc:527] Waited for 676000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:46.359148 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:46.607249 31499 thread.cc:527] Waited for 677000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:47.525599 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:47.607450 31499 thread.cc:527] Waited for 678000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:48.607669 31499 thread.cc:527] Waited for 679000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:48.691025 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:49.607939 31499 thread.cc:527] Waited for 680000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:49.858016 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:50.608158 31499 thread.cc:527] Waited for 681000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:51.027289 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:51.608373 31499 thread.cc:527] Waited for 682000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:52.193578 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:52.608608 31499 thread.cc:527] Waited for 683000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:53.365173 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:53.608793 31499 thread.cc:527] Waited for 684000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:54.534695 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:54.609027 31499 thread.cc:527] Waited for 685000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:55.609233 31499 thread.cc:527] Waited for 686000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:55.707027 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:56.609436 31499 thread.cc:527] Waited for 687000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:56.877758 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:57.609660 31499 thread.cc:527] Waited for 688000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:58.051244 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:58.609887 31499 thread.cc:527] Waited for 689000ms trying to join with rpc worker (tid 2227)
W20251025 14:20:59.224921 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:20:59.610127 31499 thread.cc:527] Waited for 690000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:00.401289 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:00.610332 31499 thread.cc:527] Waited for 691000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:01.578804 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:01.610548 31499 thread.cc:527] Waited for 692000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:02.610746 31499 thread.cc:527] Waited for 693000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:02.757854 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:03.610960 31499 thread.cc:527] Waited for 694000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:03.933538 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:04.611186 31499 thread.cc:527] Waited for 695000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:05.110134 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:05.611394 31499 thread.cc:527] Waited for 696000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:06.287768 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:06.611609 31499 thread.cc:527] Waited for 697000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:07.470383 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:07.611819 31499 thread.cc:527] Waited for 698000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:08.612003 31499 thread.cc:527] Waited for 699000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:08.654728 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:09.612238 31499 thread.cc:527] Waited for 700000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:09.839802 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:10.612465 31499 thread.cc:527] Waited for 701000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:11.021405 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:11.612684 31499 thread.cc:527] Waited for 702000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:12.207286 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:12.612913 31499 thread.cc:527] Waited for 703000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:13.392925 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:13.613171 31499 thread.cc:527] Waited for 704000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:14.580363 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:14.613404 31499 thread.cc:527] Waited for 705000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:15.613626 31499 thread.cc:527] Waited for 706000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:15.767875 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:16.613891 31499 thread.cc:527] Waited for 707000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:16.959409 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:17.614109 31499 thread.cc:527] Waited for 708000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:18.147166 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:18.614326 31499 thread.cc:527] Waited for 709000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:19.337749 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:19.614549 31499 thread.cc:527] Waited for 710000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:20.529239 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:20.614773 31499 thread.cc:527] Waited for 711000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:21.614974 31499 thread.cc:527] Waited for 712000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:21.722340 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:22.615177 31499 thread.cc:527] Waited for 713000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:22.914944 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:23.615374 31499 thread.cc:527] Waited for 714000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:24.110112 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:24.615563 31499 thread.cc:527] Waited for 715000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:25.308552 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:25.615765 31499 thread.cc:527] Waited for 716000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:26.504127 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:26.615989 31499 thread.cc:527] Waited for 717000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:27.616197 31499 thread.cc:527] Waited for 718000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:27.701704 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:28.616392 31499 thread.cc:527] Waited for 719000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:28.903313 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:29.616601 31499 thread.cc:527] Waited for 720000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:30.102702 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:30.616807 31499 thread.cc:527] Waited for 721000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:31.302163 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:31.617025 31499 thread.cc:527] Waited for 722000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:32.502012 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:32.617233 31499 thread.cc:527] Waited for 723000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:33.617441 31499 thread.cc:527] Waited for 724000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:33.705610 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:34.617657 31499 thread.cc:527] Waited for 725000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:34.909433 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:35.617854 31499 thread.cc:527] Waited for 726000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:36.114750 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:36.618064 31499 thread.cc:527] Waited for 727000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:37.321309 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:37.618279 31499 thread.cc:527] Waited for 728000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:38.526800 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:38.618479 31499 thread.cc:527] Waited for 729000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:39.618667 31499 thread.cc:527] Waited for 730000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:39.734277 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:40.618857 31499 thread.cc:527] Waited for 731000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:40.944589 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:41.619066 31499 thread.cc:527] Waited for 732000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:42.154102 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:42.619266 31499 thread.cc:527] Waited for 733000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:43.366432 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:43.619484 31499 thread.cc:527] Waited for 734000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:44.577435 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:44.619673 31499 thread.cc:527] Waited for 735000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:45.619866 31499 thread.cc:527] Waited for 736000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:45.790905 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:46.620067 31499 thread.cc:527] Waited for 737000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:47.003300 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:47.620280 31499 thread.cc:527] Waited for 738000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:48.218977 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:48.620487 31499 thread.cc:527] Waited for 739000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:49.433369 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:49.620708 31499 thread.cc:527] Waited for 740000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:50.620909 31499 thread.cc:527] Waited for 741000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:50.650935 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:51.621148 31499 thread.cc:527] Waited for 742000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:51.869324 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:52.621343 31499 thread.cc:527] Waited for 743000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:53.089869 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:53.621536 31499 thread.cc:527] Waited for 744000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:54.312413 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:54.621734 31499 thread.cc:527] Waited for 745000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:55.533926 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:55.621918 31499 thread.cc:527] Waited for 746000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:56.622087 31499 thread.cc:527] Waited for 747000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:56.757261 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:57.622278 31499 thread.cc:527] Waited for 748000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:57.980396 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:58.622587 31499 thread.cc:527] Waited for 749000ms trying to join with rpc worker (tid 2227)
W20251025 14:21:59.206853 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:21:59.622771 31499 thread.cc:527] Waited for 750000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:00.429819 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:00.622985 31499 thread.cc:527] Waited for 751000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:01.623185 31499 thread.cc:527] Waited for 752000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:01.654295 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:02.623394 31499 thread.cc:527] Waited for 753000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:02.879050 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:03.623591 31499 thread.cc:527] Waited for 754000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:04.105557 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:04.623809 31499 thread.cc:527] Waited for 755000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:05.333215 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:05.624023 31499 thread.cc:527] Waited for 756000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:06.563088 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:06.624217 31499 thread.cc:527] Waited for 757000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:07.624420 31499 thread.cc:527] Waited for 758000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:07.794643 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:08.624629 31499 thread.cc:527] Waited for 759000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:09.025395 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:09.624867 31499 thread.cc:527] Waited for 760000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:10.258103 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:10.625088 31499 thread.cc:527] Waited for 761000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:11.491619 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:11.625308 31499 thread.cc:527] Waited for 762000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:12.625522 31499 thread.cc:527] Waited for 763000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:12.726161 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:13.625754 31499 thread.cc:527] Waited for 764000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:13.961606 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:14.625986 31499 thread.cc:527] Waited for 765000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:15.198117 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:15.626217 31499 thread.cc:527] Waited for 766000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:16.435173 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:16.626441 31499 thread.cc:527] Waited for 767000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:17.626660 31499 thread.cc:527] Waited for 768000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:17.672705 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:18.626899 31499 thread.cc:527] Waited for 769000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:18.915189 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:19.627132 31499 thread.cc:527] Waited for 770000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:20.158635 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:20.627373 31499 thread.cc:527] Waited for 771000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:21.398286 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:21.627586 31499 thread.cc:527] Waited for 772000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:22.627795 31499 thread.cc:527] Waited for 773000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:22.640954 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:23.628012 31499 thread.cc:527] Waited for 774000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:23.884554 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:24.628214 31499 thread.cc:527] Waited for 775000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:25.128141 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:25.628427 31499 thread.cc:527] Waited for 776000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:26.375571 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:26.628645 31499 thread.cc:527] Waited for 777000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:27.623171 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:27.628847 31499 thread.cc:527] Waited for 778000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:28.629036 31499 thread.cc:527] Waited for 779000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:28.868765 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:29.629235 31499 thread.cc:527] Waited for 780000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:30.115358 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:30.629457 31499 thread.cc:527] Waited for 781000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:31.367141 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:31.629707 31499 thread.cc:527] Waited for 782000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:32.615968 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:32.629940 31499 thread.cc:527] Waited for 783000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:33.630146 31499 thread.cc:527] Waited for 784000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:33.866526 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:34.630381 31499 thread.cc:527] Waited for 785000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:35.119247 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:35.630622 31499 thread.cc:527] Waited for 786000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:36.375783 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:36.630878 31499 thread.cc:527] Waited for 787000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:37.629422 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:37.631088 31499 thread.cc:527] Waited for 788000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:38.631278 31499 thread.cc:527] Waited for 789000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:38.885843 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:39.631515 31499 thread.cc:527] Waited for 790000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:40.145368 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:40.631762 31499 thread.cc:527] Waited for 791000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:41.405848 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:41.632019 31499 thread.cc:527] Waited for 792000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:42.632263 31499 thread.cc:527] Waited for 793000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:42.662483 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:43.632483 31499 thread.cc:527] Waited for 794000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:43.923056 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:44.632705 31499 thread.cc:527] Waited for 795000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:45.183430 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:45.632946 31499 thread.cc:527] Waited for 796000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:46.445919 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:46.633191 31499 thread.cc:527] Waited for 797000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:47.633402 31499 thread.cc:527] Waited for 798000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:47.711452 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
W20251025 14:22:48.633636 31499 thread.cc:527] Waited for 799000ms trying to join with rpc worker (tid 2227)
W20251025 14:22:48.973554 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
************************ BEGIN STACKS **************************
[New LWP 31502]
[New LWP 31503]
[New LWP 2023]
[New LWP 2177]
[New LWP 2178]
[New LWP 2181]
[New LWP 2182]
[New LWP 2183]
[New LWP 2184]
[New LWP 2185]
[New LWP 2186]
[New LWP 2198]
[New LWP 2199]
[New LWP 2200]
[New LWP 2201]
[New LWP 2202]
[New LWP 2203]
[New LWP 2204]
[New LWP 2205]
[New LWP 2206]
[New LWP 2207]
[New LWP 2227]
[New LWP 2238]
[New LWP 2239]
[New LWP 2240]
[New LWP 2241]
[New LWP 2242]
[New LWP 2244]
[New LWP 2250]
[New LWP 2253]
[New LWP 2254]
[New LWP 2255]
[New LWP 2256]
[New LWP 2257]
[New LWP 2258]
[New LWP 2259]
[New LWP 2260]
[New LWP 2261]
[New LWP 2262]
[New LWP 2282]
[New LWP 2283]
[New LWP 2284]
[New LWP 2285]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
Id Target Id Frame
* 1 Thread 0x7f2379dfad40 (LWP 31499) "txn_commit-ites" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
2 Thread 0x7f2377dd6700 (LWP 31502) "txn_commit-ites" 0x00007f238348732a in waitpid () from /lib/x86_64-linux-gnu/libpthread.so.0
3 Thread 0x7f23775d5700 (LWP 31503) "kernel-watcher-" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
4 Thread 0x7f2322143700 (LWP 2023) "rpc reactor-202" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
5 Thread 0x7f235019f700 (LWP 2177) "file cache-evic" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
6 Thread 0x7f234f19d700 (LWP 2178) "sq_acceptor" 0x00007f238188bcb9 in poll () from /lib/x86_64-linux-gnu/libc.so.6
7 Thread 0x7f23721e3700 (LWP 2181) "rpc reactor-218" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
8 Thread 0x7f23729e4700 (LWP 2182) "rpc reactor-218" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
9 Thread 0x7f23719e2700 (LWP 2183) "rpc reactor-218" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
10 Thread 0x7f235f9be700 (LWP 2184) "rpc reactor-218" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
11 Thread 0x7f235e1bb700 (LWP 2185) "MaintenanceMgr " 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
12 Thread 0x7f235c9b8700 (LWP 2186) "maintenance_sch" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
13 Thread 0x7f234b195700 (LWP 2198) "rpc worker-2198" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
14 Thread 0x7f234a994700 (LWP 2199) "rpc worker-2199" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
15 Thread 0x7f234a193700 (LWP 2200) "rpc worker-2200" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
16 Thread 0x7f2349992700 (LWP 2201) "rpc worker-2201" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
17 Thread 0x7f2349191700 (LWP 2202) "rpc worker-2202" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
18 Thread 0x7f2348990700 (LWP 2203) "rpc worker-2203" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
19 Thread 0x7f234818f700 (LWP 2204) "rpc worker-2204" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
20 Thread 0x7f234798e700 (LWP 2205) "rpc worker-2205" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
21 Thread 0x7f234698c700 (LWP 2206) "rpc worker-2206" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
22 Thread 0x7f234618b700 (LWP 2207) "rpc worker-2207" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
23 Thread 0x7f233c177700 (LWP 2227) "rpc worker-2227" 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
24 Thread 0x7f233696c700 (LWP 2238) "diag-logger-223" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
25 Thread 0x7f233616b700 (LWP 2239) "result-tracker-" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
26 Thread 0x7f233596a700 (LWP 2240) "excess-log-dele" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
27 Thread 0x7f2335169700 (LWP 2241) "tcmalloc-memory" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
28 Thread 0x7f2334968700 (LWP 2242) "acceptor-2242" 0x00007f238189a0c7 in accept4 () from /lib/x86_64-linux-gnu/libc.so.6
29 Thread 0x7f2333966700 (LWP 2244) "expired-reserve" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
30 Thread 0x7f2330960700 (LWP 2250) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
31 Thread 0x7f232f95e700 (LWP 2253) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
32 Thread 0x7f232e95c700 (LWP 2254) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
33 Thread 0x7f232e15b700 (LWP 2255) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
34 Thread 0x7f232d95a700 (LWP 2256) "auto-rebalancer" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
35 Thread 0x7f232d159700 (LWP 2257) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
36 Thread 0x7f232c958700 (LWP 2258) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
37 Thread 0x7f232c157700 (LWP 2259) "rpc reactor-225" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
38 Thread 0x7f232b956700 (LWP 2260) "rpc reactor-226" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
39 Thread 0x7f232b155700 (LWP 2261) "auto-leader-reb" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
40 Thread 0x7f232a954700 (LWP 2262) "bgtasks-2262" 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
41 Thread 0x7f231b135700 (LWP 2282) "rpc reactor-228" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
42 Thread 0x7f2331962700 (LWP 2283) "rpc reactor-228" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
43 Thread 0x7f233015f700 (LWP 2284) "rpc reactor-228" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
44 Thread 0x7f231b936700 (LWP 2285) "rpc reactor-228" 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
Thread 44 (Thread 0x7f231b936700 (LWP 2285)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbc966100, timeout=0.099450064272332384) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbc966100, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbbb4e3d8) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ee058) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ee000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 43 (Thread 0x7f233015f700 (LWP 2284)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbc965600, timeout=0.099706791272637929) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbc965600, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bec98) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4eef58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4eef00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 42 (Thread 0x7f2331962700 (LWP 2283)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbc965b80, timeout=0.099711401272088551) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbc965b80, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bee58) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4eed58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4eed00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 41 (Thread 0x7f231b135700 (LWP 2282)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf0ec00, timeout=0.099671702272189577) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf0ec00, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bf718) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4efd58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4efd00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 40 (Thread 0x7f232a954700 (LWP 2262)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47c68 in kudu::ConditionVariable::WaitFor (this=this@entry=0x564bbb8c0530, delta=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:121
#2 0x00007f2383b8291b in kudu::master::CatalogManagerBgTasks::Wait (msec=1000, this=0x564bbb8c0500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/catalog_manager.cc:764
#3 kudu::master::CatalogManagerBgTasks::Run (this=0x564bbb8c0500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/catalog_manager.cc:892
#4 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403258) at /usr/include/c++/7/bits/std_function.h:706
#5 kudu::Thread::SuperviseThread (arg=0x564bbb403200) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#6 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 39 (Thread 0x7f232b155700 (LWP 2261)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbc984068, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2383b379a4 in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbc984040) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbc984040) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::master::AutoLeaderRebalancerTask::RunLoop (this=0x564bbc984000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/auto_leader_rebalancer.cc:457
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403358) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb403300) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 38 (Thread 0x7f232b956700 (LWP 2260)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf0d080, timeout=0.099745994271415839) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf0d080, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bead8) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403d58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb403d00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 37 (Thread 0x7f232c157700 (LWP 2259)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf0cb00, timeout=0.099745859271479276) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf0cb00, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bf018) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403058) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb403000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 36 (Thread 0x7f232c958700 (LWP 2258)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf0c580, timeout=0.099032782271478936) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf0c580, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bfe18) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403f58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb403f00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 35 (Thread 0x7f232d159700 (LWP 2257)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf0c000, timeout=0.099719235271550133) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf0c000, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3be058) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ef158) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ef100) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 34 (Thread 0x7f232d95a700 (LWP 2256)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbb761040, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2383b276c7 in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbb761018) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbb761018) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::master::AutoRebalancerTask::RunLoop (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/auto_rebalancer.cc:199
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ee358) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ee300) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 33 (Thread 0x7f232e15b700 (LWP 2255)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9a2100, timeout=0.099079524271473929) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9a2100, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3be758) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbc1a158) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbc1a100) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 32 (Thread 0x7f232e95c700 (LWP 2254)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9a1080, timeout=0.099070033271345892) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9a1080, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3bf558) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbc1b758) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbc1b700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 31 (Thread 0x7f232f95e700 (LWP 2253)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9a2680, timeout=0.099824350271774165) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9a2680, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc3be218) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb402c58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb402c00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 30 (Thread 0x7f2330960700 (LWP 2250)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9a2c00, timeout=0.099004547271306365) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9a2c00, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc35ae58) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbc1b158) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbc1b100) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 29 (Thread 0x7f2333966700 (LWP 2244)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbc112170, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2383bb978f in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbc112148) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbc112148) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::master::Master::ExpiredReservedTablesDeleterThread (this=0x564bbc112000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/master.cc:598
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc32f558) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc32f500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 28 (Thread 0x7f2334968700 (LWP 2242)):
#0 0x00007f238189a0c7 in accept4 () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f2382ff2699 in kudu::Socket::Accept (this=this@entry=0x564bbbd034b8, new_conn=new_conn@entry=0x7f2334967510, remote=remote@entry=0x7f2334967700, flags=flags@entry=1) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/net/socket.cc:534
#2 0x00007f2380b81862 in kudu::rpc::AcceptorPool::RunThread (this=0x564bbbd034b0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/acceptor_pool.cc:297
#3 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc32ff58) at /usr/include/c++/7/bits/std_function.h:706
#4 kudu::Thread::SuperviseThread (arg=0x564bbc32ff00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#5 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#6 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 27 (Thread 0x7f2335169700 (LWP 2241)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbc112170, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f23809decab in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbc112148) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbc112148) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::server::ServerBase::TcmallocMemoryGcThread (this=0x564bbc112000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/server/server_base.cc:1274
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc32e958) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc32e900) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 26 (Thread 0x7f233596a700 (LWP 2240)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbc112170, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f23809de53f in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbc112148) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::server::ServerBase::ExcessLogFileDeleterThread (this=0x564bbc112000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/server/server_base.cc:1221
#4 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc32e458) at /usr/include/c++/7/bits/std_function.h:706
#5 kudu::Thread::SuperviseThread (arg=0x564bbc32e400) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#6 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 25 (Thread 0x7f233616b700 (LWP 2239)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbb883680, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2380bbac6f in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbb883658) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbb883658) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::rpc::ResultTracker::RunGCThread (this=0x564bbb8835f0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/result_tracker.cc:467
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbae8158) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbae8100) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 24 (Thread 0x7f233696c700 (LWP 2238)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbbae9080, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f23809cea12 in kudu::server::DiagnosticsLog::RunThread (this=0x564bbbae9000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/server/diagnostics_log.cc:204
#3 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbae9d58) at /usr/include/c++/7/bits/std_function.h:706
#4 kudu::Thread::SuperviseThread (arg=0x564bbbae9d00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#5 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#6 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 23 (Thread 0x7f233c177700 (LWP 2227)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2383aa4ab0 in kudu::CountDownLatch::Wait (this=0x564bbb9c1998) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:79
#3 kudu::Synchronizer::Wait (this=<synthetic pointer>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/async_util.h:58
#4 kudu::transactions::TxnSystemClient::KeepTransactionAlive (this=0x564bbc447ce0, txn_id=txn_id@entry=0, user="slave", deadline=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/transactions/txn_system_client.cc:322
#5 0x00007f2383c26aea in kudu::transactions::TxnManager::KeepTransactionAlive (this=this@entry=0x564bbc402780, txn_id=0, username="slave", deadline=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/txn_manager.cc:227
#6 0x00007f2383c26f61 in kudu::transactions::TxnManagerServiceImpl::KeepTransactionAlive (this=<optimized out>, req=0x564bbc0e9068, resp=0x564bbc0e9088, ctx=0x564bbb536d00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/txn_manager_service.cc:162
#7 0x00007f2380bd85a8 in std::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const (__args#2=<optimized out>, __args#1=<optimized out>, __args#0=<optimized out>, this=0x564bbc48d3d8) at /usr/include/c++/7/bits/std_function.h:706
#8 kudu::rpc::GeneratedServiceIf::Handle (this=<optimized out>, call=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_if.cc:137
#9 0x00007f2380bd954c in kudu::rpc::ServicePool::RunThread (this=0x564bbb604380) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:229
#10 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbca7e058) at /usr/include/c++/7/bits/std_function.h:706
#11 kudu::Thread::SuperviseThread (arg=0x564bbca7e000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#12 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#13 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 22 (Thread 0x7f234618b700 (LWP 2207)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fe980) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234618a700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb402b58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb402b00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 21 (Thread 0x7f234698c700 (LWP 2206)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fe180) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234698b700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb403c58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb403c00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 20 (Thread 0x7f234798e700 (LWP 2205)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fed80) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234798d700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb751458) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb751400) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 19 (Thread 0x7f234818f700 (LWP 2204)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8ffe00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234818e700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbc1a758) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbc1a700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 18 (Thread 0x7f2348990700 (LWP 2203)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fea80) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234898f700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbbc1ba58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbbc1ba00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 17 (Thread 0x7f2349191700 (LWP 2202)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8ff180) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f2349190700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb751558) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb751500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 16 (Thread 0x7f2349992700 (LWP 2201)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fe600) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f2349991700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb751358) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb751300) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 15 (Thread 0x7f234a193700 (LWP 2200)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fee00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234a192700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb751f58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb751f00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 14 (Thread 0x7f234a994700 (LWP 2199)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fec00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234a993700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb750958) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb750900) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 13 (Thread 0x7f234b195700 (LWP 2198)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2380bdb978 in kudu::rpc::LifoServiceQueue::ConsumerState::Wait (this=0x564bbb8fe800) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.h:157
#3 kudu::rpc::LifoServiceQueue::BlockingGet (this=this@entry=0x564bbb6042d0, out=out@entry=0x7f234b194700) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_queue.cc:67
#4 0x00007f2380bd9446 in kudu::rpc::ServicePool::RunThread (this=0x564bbb6042a0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:203
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb750858) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb750800) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 12 (Thread 0x7f235c9b8700 (LWP 2186)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47c68 in kudu::ConditionVariable::WaitFor (this=this@entry=0x564bbb9276f0, delta=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:121
#2 0x00007f2382fbe70b in kudu::MaintenanceManager::RunSchedulerThread (this=0x564bbb927600) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/maintenance_manager.cc:365
#3 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ef558) at /usr/include/c++/7/bits/std_function.h:706
#4 kudu::Thread::SuperviseThread (arg=0x564bbc4ef500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#5 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#6 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 11 (Thread 0x7f235e1bb700 (LWP 2185)):
#0 0x00007f2383482ad3 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47b6e in kudu::ConditionVariable::Wait (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:55
#2 0x00007f2383037f70 in kudu::ThreadPool::DispatchThread (this=0x564bbbae66c0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/threadpool.cc:675
#3 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4eff58) at /usr/include/c++/7/bits/std_function.h:706
#4 kudu::Thread::SuperviseThread (arg=0x564bbc4eff00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#5 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#6 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 10 (Thread 0x7f235f9be700 (LWP 2184)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9e2c00, timeout=0.099093016271126544) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9e2c00, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc558218) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ef858) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ef800) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 9 (Thread 0x7f23719e2700 (LWP 2183)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9e0580, timeout=0.09936061027156029) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9e0580, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc559c58) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4efe58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4efe00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 8 (Thread 0x7f23729e4700 (LWP 2182)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb9a3180, timeout=0.099354210271485499) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb9a3180, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc558598) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ee558) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ee500) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 7 (Thread 0x7f23721e3700 (LWP 2181)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbbf1a000, timeout=0.099155986271398433) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbbf1a000, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbc558758) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc4ee858) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc4ee800) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 6 (Thread 0x7f234f19d700 (LWP 2178)):
#0 0x00007f238188bcb9 in poll () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f2380a1b717 in poll (__timeout=200, __nfds=<optimized out>, __fds=0x564bbb50fe50) at /usr/include/x86_64-linux-gnu/bits/poll2.h:46
#2 master_thread (thread_func_param=0x564bbc81e800) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/squeasel-d83cf6d9af0e2c98c16467a6a035ae0d7ca21cb1/squeasel.c:4515
#3 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#4 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 5 (Thread 0x7f235019f700 (LWP 2177)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbb516d40, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2382f8913f in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbb516d18) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbb516d18) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::FileCache::RunDescriptorExpiry (this=0x564bbb516c60) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/file_cache.cc:801
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbc420a58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbc420a00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 4 (Thread 0x7f2322143700 (LWP 2023)):
#0 0x00007f2381898a47 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f237cf9425d in epoll_poll (loop=0x564bbb5af600, timeout=0.099273505260498496) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev_epoll.c:155
#2 0x00007f237cf98ba3 in ev_run (loop=0x564bbb5af600, flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/libev-4.33/ev.c:4157
#3 0x00007f2380bafeda in ev::loop_ref::run (flags=0, this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/ev++.h:211
#4 kudu::rpc::ReactorThread::RunThread (this=0x564bbb8f7e18) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/reactor.cc:510
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbca7fd58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbca7fd00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 3 (Thread 0x7f23775d5700 (LWP 31503)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbb4029b8, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2382fb4603 in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbb402990) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbb402990) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::KernelStackWatchdog::RunThread (this=0x564bbb402900) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/kernel_stack_watchdog.cc:134
#5 0x00007f238302d5d7 in std::function<void ()>::operator()() const (this=0x564bbb402d58) at /usr/include/c++/7/bits/std_function.h:706
#6 kudu::Thread::SuperviseThread (arg=0x564bbb402d00) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:694
#7 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 2 (Thread 0x7f2377dd6700 (LWP 31502)):
#0 0x00007f238348732a in waitpid () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f23830211bd in kudu::Subprocess::DoWait (this=this@entry=0x7f2377dd5000, wait_status=wait_status@entry=0x0, mode=mode@entry=kudu::Subprocess::BLOCKING) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/subprocess.cc:849
#2 0x00007f2383021732 in kudu::Subprocess::Wait (this=this@entry=0x7f2377dd5000, wait_status=wait_status@entry=0x0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/subprocess.cc:561
#3 0x00007f2382f9dc19 in kudu::PstackWatcher::RunStackDump (argv=std::vector of length 12, capacity 16 = {...}) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/pstack_watcher.cc:260
#4 0x00007f2382f9ec83 in kudu::PstackWatcher::RunGdbStackDump (pid=pid@entry=31499, flags=flags@entry=0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/pstack_watcher.cc:218
#5 0x00007f2382fa1654 in kudu::PstackWatcher::DumpPidStacks (pid=31499, flags=flags@entry=0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/pstack_watcher.cc:169
#6 0x00007f2382fa1802 in kudu::PstackWatcher::DumpStacks (flags=0) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/pstack_watcher.cc:162
#7 0x0000564b89437c00 in kudu::<lambda()>::operator() (__closure=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/test_main.cc:66
#8 std::__invoke_impl<void, kudu::CreateAndStartTimeoutThread()::<lambda()> > (__f=...) at /usr/include/c++/7/bits/invoke.h:60
#9 std::__invoke<kudu::CreateAndStartTimeoutThread()::<lambda()> > (__fn=...) at /usr/include/c++/7/bits/invoke.h:95
#10 std::thread::_Invoker<std::tuple<kudu::CreateAndStartTimeoutThread()::<lambda()> > >::_M_invoke<0> (this=<optimized out>) at /usr/include/c++/7/thread:234
#11 std::thread::_Invoker<std::tuple<kudu::CreateAndStartTimeoutThread()::<lambda()> > >::operator() (this=<optimized out>) at /usr/include/c++/7/thread:243
#12 std::thread::_State_impl<std::thread::_Invoker<std::tuple<kudu::CreateAndStartTimeoutThread()::<lambda()> > > >::_M_run(void) (this=<optimized out>) at /usr/include/c++/7/thread:186
#13 0x00007f2381e3d6df in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#14 0x00007f238347c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#15 0x00007f238189871f in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 1 (Thread 0x7f2379dfad40 (LWP 31499)):
#0 0x00007f2383482fb9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f2382f47bcc in kudu::ConditionVariable::WaitUntil (this=this@entry=0x564bbca7e0a0, until=...) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/condition_variable.cc:87
#2 0x00007f2383027aef in kudu::CountDownLatch::WaitUntil (when=..., this=0x564bbca7e078) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:89
#3 kudu::CountDownLatch::WaitFor (delta=..., this=0x564bbca7e078) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/countdown_latch.h:99
#4 kudu::ThreadJoiner::Join (this=this@entry=0x7ffc8d35f710) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/thread.cc:547
#5 0x00007f2380bd8c1b in kudu::rpc::ServicePool::Shutdown (this=this@entry=0x564bbb604380) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:110
#6 0x00007f2380bd8e41 in kudu::rpc::ServicePool::~ServicePool (this=0x564bbb604380, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:86
#7 0x00007f2380bd8f81 in kudu::rpc::ServicePool::~ServicePool (this=0x564bbb604380, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/service_pool.cc:87
#8 0x00007f2380b991e9 in kudu::RefCountedThreadSafe<kudu::rpc::RpcService, kudu::DefaultRefCountedThreadSafeTraits<kudu::rpc::RpcService> >::DeleteInternal (x=0x564bbb604380) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/gutil/ref_counted.h:153
#9 kudu::DefaultRefCountedThreadSafeTraits<kudu::rpc::RpcService>::Destruct (x=0x564bbb604380) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/gutil/ref_counted.h:117
#10 kudu::RefCountedThreadSafe<kudu::rpc::RpcService, kudu::DefaultRefCountedThreadSafeTraits<kudu::rpc::RpcService> >::Release (this=0x564bbb604388) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/gutil/ref_counted.h:144
#11 scoped_refptr<kudu::rpc::RpcService>::~scoped_refptr (this=0x564bbc4960e8, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/gutil/ref_counted.h:266
#12 std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> >::~pair (this=0x564bbc4960c8, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/stl_pair.h:208
#13 __gnu_cxx::new_allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > >::destroy<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > > (this=<synthetic pointer>, __p=<optimized out>) at /usr/include/c++/7/ext/new_allocator.h:140
#14 std::allocator_traits<std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > > >::destroy<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > > (__a=<synthetic pointer>..., __p=<optimized out>) at /usr/include/c++/7/bits/alloc_traits.h:487
#15 std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> >, true> > >::_M_deallocate_node (this=<optimized out>, __n=0x564bbc4960c0) at /usr/include/c++/7/bits/hashtable_policy.h:2084
#16 std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> >, true> > >::_M_deallocate_nodes (this=0x7ffc8d35f870, __n=<optimized out>) at /usr/include/c++/7/bits/hashtable_policy.h:2097
#17 std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::clear (this=0x7ffc8d35f870) at /usr/include/c++/7/bits/hashtable.h:2032
#18 std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::~_Hashtable (this=0x7ffc8d35f870, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/hashtable.h:1358
#19 std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, scoped_refptr<kudu::rpc::RpcService>, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, scoped_refptr<kudu::rpc::RpcService> > > >::~unordered_map (this=0x7ffc8d35f870, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/unordered_map.h:101
#20 kudu::rpc::Messenger::UnregisterAllServices (this=0x564bbc436600) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/rpc/messenger.cc:311
#21 0x00007f23809df321 in kudu::server::ServerBase::UnregisterAllServices (this=this@entry=0x564bbc112000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/server/server_base.cc:1415
#22 0x00007f2383bb48eb in kudu::master::Master::ShutdownImpl (this=0x564bbc112000) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/master.cc:565
#23 0x00007f2383c27964 in kudu::master::Master::Shutdown (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/master.h:80
#24 kudu::master::MiniMaster::Shutdown (this=0x564bbb49c010) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/mini_master.cc:124
#25 0x00007f2383c279c4 in kudu::master::MiniMaster::Shutdown (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/master/mini_master.cc:127
#26 0x00007f2383dffb9c in kudu::cluster::InternalMiniCluster::ShutdownNodes (this=0x564bbbb70d10, nodes=kudu::cluster::ClusterNodes::ALL) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/mini-cluster/internal_mini_cluster.cc:248
#27 0x00007f2383dffc43 in kudu::cluster::MiniCluster::Shutdown (this=0x564bbbb70d10) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/mini-cluster/mini_cluster.h:84
#28 kudu::cluster::InternalMiniCluster::~InternalMiniCluster (this=0x564bbbb70d10, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/mini-cluster/internal_mini_cluster.cc:98
#29 0x00007f2383dffed1 in kudu::cluster::InternalMiniCluster::~InternalMiniCluster (this=0x564bbbb70d10, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/mini-cluster/internal_mini_cluster.cc:99
#30 0x0000564b89435ea8 in std::default_delete<kudu::cluster::InternalMiniCluster>::operator() (this=<optimized out>, __ptr=<optimized out>) at /usr/include/c++/7/bits/unique_ptr.h:78
#31 std::unique_ptr<kudu::cluster::InternalMiniCluster, std::default_delete<kudu::cluster::InternalMiniCluster> >::~unique_ptr (this=0x564bbb46b7c0, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/unique_ptr.h:263
#32 kudu::itest::TxnCommitITest::~TxnCommitITest (this=0x564bbb46b770, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/integration-tests/txn_commit-itest.cc:109
#33 0x0000564b8943603f in kudu::itest::TxnCommitITest_TestLoadTxnStatusManagerWhenNoMasters_Test::~TxnCommitITest_TestLoadTxnStatusManagerWhenNoMasters_Test (this=0x564bbb46b770, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/integration-tests/txn_commit-itest.cc:637
#34 kudu::itest::TxnCommitITest_TestLoadTxnStatusManagerWhenNoMasters_Test::~TxnCommitITest_TestLoadTxnStatusManagerWhenNoMasters_Test (this=0x564bbb46b770, __in_chrg=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/integration-tests/txn_commit-itest.cc:637
#35 0x00007f23831060ed in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (location=0x7f2383107844 "the test fixture's destructor", method=<optimized out>, object=0x564bbb46b770) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599
#36 testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=object@entry=0x564bbb46b770, method=<optimized out>, location=location@entry=0x7f2383107844 "the test fixture's destructor") at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635
#37 0x00007f23830fad10 in testing::TestInfo::Run (this=0x564bbb3f2ea0) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2859
#38 0x00007f23830fb377 in testing::TestSuite::Run (this=0x564bbb3f2360) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012
#39 0x00007f23830fb77c in testing::internal::UnitTestImpl::RunAllTests (this=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870
#40 0x00007f238310660d in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (location=0x7f23831093d8 "auxiliary test code (environments or event listeners)", method=<optimized out>, object=0x564bbb45e280) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599
#41 testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x564bbb45e280, method=<optimized out>, location=location@entry=0x7f23831093d8 "auxiliary test code (environments or event listeners)") at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635
#42 0x00007f23830fae63 in testing::UnitTest::Run (this=0x7f238331a1e0 <testing::UnitTest::GetInstance()::instance>) at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444
#43 0x0000564b8940d317 in RUN_ALL_TESTS () at /home/jenkins-slave/workspace/build_and_test_flaky/thirdparty/installed/uninstrumented/include/gtest/gtest.h:2293
#44 main (argc=<optimized out>, argv=<optimized out>) at /home/jenkins-slave/workspace/build_and_test_flaky/src/kudu/util/test_main.cc:115
W20251025 14:23:14.180899 31499 thread.cc:527] Waited for 800000ms trying to join with rpc worker (tid 2227)
W20251025 14:23:14.181021 2283 master_proxy_rpc.cc:203] Re-attempting LookupRpc request to leader Master (127.30.194.254:42933)
************************* END STACKS ***************************
F20251025 14:23:14.259034 31502 test_main.cc:69] Maximum unit test time exceeded (870 sec)
*** Check failure stack trace: ***
*** Aborted at 1761402194 (unix time) try "date -d @1761402194" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x3e800007b0b) received by PID 31499 (TID 0x7f2377dd6700) from PID 31499; stack trace: ***
@ 0x7f2383487980 (unknown) at ??:0
@ 0x7f23817b5fb7 gsignal at ??:0
@ 0x7f23817b7921 abort at ??:0
@ 0x7f238261edcd google::LogMessage::Fail() at ??:0
@ 0x7f2382622b93 google::LogMessage::SendToLog() at ??:0
@ 0x7f238261e7cc google::LogMessage::Flush() at ??:0
@ 0x7f238261ff59 google::LogMessageFatal::~LogMessageFatal() at ??:0
@ 0x564b89437c6a std::thread::_State_impl<std::thread::_Invoker<std::tuple<kudu::CreateAndStartTimeoutThread()::<lambda()> > > >::_M_run() at ??:0
@ 0x7f2381e3d6df (unknown) at ??:0
@ 0x7f238347c6db start_thread at ??:0
@ 0x7f238189871f clone at ??:0