Diagnosed failure

TokenSignerITest.AuthnTokenLifecycle: Unrecognized error type. Please see the error log for more information.

Full log

(The test binary runs all 4 tests in TokenSignerITest; the log below begins with the first test, TskAtLeaderMaster, not the diagnosed AuthnTokenLifecycle case.)

[==========] Running 4 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 4 tests from TokenSignerITest
[ RUN      ] TokenSignerITest.TskAtLeaderMaster
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250114 20:52:51.349879   416 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.0.104.62:41559,127.0.104.61:37021,127.0.104.60:38961
I20250114 20:52:51.352443   416 env_posix.cc:2256] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250114 20:52:51.353780   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:51.365895   421 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.366525   422 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.367071   424 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.368213   416 server_base.cc:1034] running on GCE node
I20250114 20:52:51.369122   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.369294   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.369376   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971369356 us; error 0 us; skew 500 ppm
I20250114 20:52:51.369742   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.376669   416 webserver.cc:458] Webserver started at http://127.0.104.62:40891/ using document root <none> and password file <none>
I20250114 20:52:51.377539   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.377637   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.378010   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.381997   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-0-root/instance:
uuid: "1b7cdb7e644c4cec889dad12a8e66211"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.388681   416 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.004s	sys 0.002s
I20250114 20:52:51.392107   429 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.393679   416 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.000s	sys 0.003s
I20250114 20:52:51.393847   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-0-root
uuid: "1b7cdb7e644c4cec889dad12a8e66211"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.394040   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.411880   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.412839   416 env_posix.cc:2256] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250114 20:52:51.413174   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.434070   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.62:41559
I20250114 20:52:51.434092   480 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.62:41559 every 8 connection(s)
I20250114 20:52:51.436519   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:51.439085   481 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:52:51.442528   483 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.443140   484 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.443859   486 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.444509   416 server_base.cc:1034] running on GCE node
I20250114 20:52:51.444833   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.444919   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.445012   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971444958 us; error 0 us; skew 500 ppm
I20250114 20:52:51.445230   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.446770   416 webserver.cc:458] Webserver started at http://127.0.104.61:33825/ using document root <none> and password file <none>
I20250114 20:52:51.447063   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.447186   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.447342   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.448184   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-1-root/instance:
uuid: "618aeafbc5ca4517958b6443719977c0"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.452286   416 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.001s
I20250114 20:52:51.454563   491 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.454999   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.002s	sys 0.000s
I20250114 20:52:51.455129   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-1-root
uuid: "618aeafbc5ca4517958b6443719977c0"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.455296   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.455637   481 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:51.463853   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.464577   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.477824   481 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:51.479756   430 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.104.61:37021: connect: Connection refused (error 111)
W20250114 20:52:51.481659   481 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.61:37021: Network error: Client connection negotiation failed: client connection to 127.0.104.61:37021: connect: Connection refused (error 111)
I20250114 20:52:51.485618   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.61:37021
I20250114 20:52:51.485666   545 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.61:37021 every 8 connection(s)
I20250114 20:52:51.487061   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:51.487154   546 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:52:51.496459   548 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.497098   549 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.498412   546 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:51.499156   551 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.499907   416 server_base.cc:1034] running on GCE node
I20250114 20:52:51.500406   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.500505   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.500584   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971500548 us; error 0 us; skew 500 ppm
I20250114 20:52:51.500758   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.502111   416 webserver.cc:458] Webserver started at http://127.0.104.60:34111/ using document root <none> and password file <none>
I20250114 20:52:51.502362   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.502467   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.502609   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.503309   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-2-root/instance:
uuid: "24f175e3435c473da9913c9384d1c34a"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.505409   546 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:51.506599   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:52:51.509052   558 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.509414   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:51.509529   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-2-root
uuid: "24f175e3435c473da9913c9384d1c34a"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.509677   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.513360   546 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:51.515604   546 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:38961: Network error: Client connection negotiation failed: client connection to 127.0.104.60:38961: connect: Connection refused (error 111)
I20250114 20:52:51.531073   481 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } attempt: 1
I20250114 20:52:51.534080   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.534806   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.537286   481 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:51.539227   481 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:38961: Network error: Client connection negotiation failed: client connection to 127.0.104.60:38961: connect: Connection refused (error 111)
I20250114 20:52:51.555753   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.60:38961
I20250114 20:52:51.555804   610 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.60:38961 every 8 connection(s)
I20250114 20:52:51.556823   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:52:51.557440   611 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:51.559873   611 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:51.566807   611 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:51.567800   546 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } attempt: 1
I20250114 20:52:51.573527   611 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:51.580688   546 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: Bootstrap starting.
I20250114 20:52:51.582443   481 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } attempt: 1
I20250114 20:52:51.583776   611 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: Bootstrap starting.
I20250114 20:52:51.585527   546 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:51.586076   611 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:51.587004   611 log.cc:826] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: Log is configured to *not* fsync() on all Append() calls
I20250114 20:52:51.590593   611 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: No bootstrap required, opened a new log
I20250114 20:52:51.590602   546 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: No bootstrap required, opened a new log
I20250114 20:52:51.593240   481 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: Bootstrap starting.
I20250114 20:52:51.595175   481 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:51.597222   481 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: No bootstrap required, opened a new log
I20250114 20:52:51.598876   611 raft_consensus.cc:357] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.598848   481 raft_consensus.cc:357] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.598850   546 raft_consensus.cc:357] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.599335   611 raft_consensus.cc:383] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:51.599344   481 raft_consensus.cc:383] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:51.599426   546 raft_consensus.cc:383] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:51.599488   481 raft_consensus.cc:738] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1b7cdb7e644c4cec889dad12a8e66211, State: Initialized, Role: FOLLOWER
I20250114 20:52:51.599511   546 raft_consensus.cc:738] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 618aeafbc5ca4517958b6443719977c0, State: Initialized, Role: FOLLOWER
I20250114 20:52:51.599491   611 raft_consensus.cc:738] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24f175e3435c473da9913c9384d1c34a, State: Initialized, Role: FOLLOWER
I20250114 20:52:51.600450   611 consensus_queue.cc:260] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.600481   481 consensus_queue.cc:260] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.600524   546 consensus_queue.cc:260] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.601996   619 sys_catalog.cc:455] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.602425   619 sys_catalog.cc:458] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.603331   611 sys_catalog.cc:564] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:51.607432   620 sys_catalog.cc:455] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.607807   620 sys_catalog.cc:458] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.608650   546 sys_catalog.cc:564] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:51.608924   481 sys_catalog.cc:564] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:51.608712   621 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.609200   621 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.622460   620 raft_consensus.cc:491] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:51.622808   620 raft_consensus.cc:513] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
W20250114 20:52:51.624935   640 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:51.625133   640 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:51.626399   620 leader_election.cc:290] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1b7cdb7e644c4cec889dad12a8e66211 (127.0.104.62:41559), 24f175e3435c473da9913c9384d1c34a (127.0.104.60:38961)
I20250114 20:52:51.626683   586 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "618aeafbc5ca4517958b6443719977c0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "24f175e3435c473da9913c9384d1c34a" is_pre_election: true
I20250114 20:52:51.627337   586 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 618aeafbc5ca4517958b6443719977c0 in term 0.
I20250114 20:52:51.627748   456 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "618aeafbc5ca4517958b6443719977c0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1b7cdb7e644c4cec889dad12a8e66211" is_pre_election: true
I20250114 20:52:51.628124   456 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 618aeafbc5ca4517958b6443719977c0 in term 0.
I20250114 20:52:51.628966   494 leader_election.cc:304] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1b7cdb7e644c4cec889dad12a8e66211, 618aeafbc5ca4517958b6443719977c0; no voters: 
I20250114 20:52:51.630237   620 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:52:51.630426   620 raft_consensus.cc:491] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:51.630533   620 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:51.636529   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
W20250114 20:52:51.638163   654 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:51.638296   654 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:51.642428   620 raft_consensus.cc:513] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.643276   620 leader_election.cc:290] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [CANDIDATE]: Term 1 election: Requested vote from peers 1b7cdb7e644c4cec889dad12a8e66211 (127.0.104.62:41559), 24f175e3435c473da9913c9384d1c34a (127.0.104.60:38961)
I20250114 20:52:51.644388   586 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "618aeafbc5ca4517958b6443719977c0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "24f175e3435c473da9913c9384d1c34a"
I20250114 20:52:51.644605   586 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:51.646584   456 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "618aeafbc5ca4517958b6443719977c0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1b7cdb7e644c4cec889dad12a8e66211"
I20250114 20:52:51.646818   456 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:51.648346   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:52:51.649529   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:51.650204   655 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:51.650337   655 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:52:51.654271   656 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.655150   657 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.655974   659 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.656721   416 server_base.cc:1034] running on GCE node
I20250114 20:52:51.657033   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.657115   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.657174   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971657152 us; error 0 us; skew 500 ppm
I20250114 20:52:51.657377   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.658880   416 webserver.cc:458] Webserver started at http://127.0.104.1:32969/ using document root <none> and password file <none>
I20250114 20:52:51.659143   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.659232   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.659390   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.660148   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-0-root/instance:
uuid: "baf2aa3bc18f495e98016d12688027f4"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.663092   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.002s	sys 0.000s
I20250114 20:52:51.665596   664 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
W20250114 20:52:51.665814   435 tablet.cc:2367] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250114 20:52:51.666049   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.001s
I20250114 20:52:51.666146   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-0-root
uuid: "baf2aa3bc18f495e98016d12688027f4"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.666262   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.684048   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.684772   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.685693   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:51.687316   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:51.687420   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.687997   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:51.688079   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.001s	user 0.000s	sys 0.000s
I20250114 20:52:51.713554   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.1:40927
I20250114 20:52:51.713565   726 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.1:40927 every 8 connection(s)
I20250114 20:52:51.714972   586 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 618aeafbc5ca4517958b6443719977c0 in term 1.
I20250114 20:52:51.715000   456 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 618aeafbc5ca4517958b6443719977c0 in term 1.
I20250114 20:52:51.715633   495 leader_election.cc:304] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 24f175e3435c473da9913c9384d1c34a, 618aeafbc5ca4517958b6443719977c0; no voters: 
I20250114 20:52:51.716617   620 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:52:51.737603   620 raft_consensus.cc:695] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 1 LEADER]: Becoming Leader. State: Replica: 618aeafbc5ca4517958b6443719977c0, State: Running, Role: LEADER
I20250114 20:52:51.738788   727 heartbeater.cc:346] Connected to a master server at 127.0.104.60:38961
I20250114 20:52:51.739080   727 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.740314   727 heartbeater.cc:510] Master 127.0.104.60:38961 requested a full tablet report, sending...
I20250114 20:52:51.738567   620 consensus_queue.cc:237] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } }
I20250114 20:52:51.743463   576 ts_manager.cc:194] Registered new tserver with Master: baf2aa3bc18f495e98016d12688027f4 (127.0.104.1:40927)
I20250114 20:52:51.747409   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:51.752931   731 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37021
I20250114 20:52:51.753150   731 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.753820   731 heartbeater.cc:510] Master 127.0.104.61:37021 requested a full tablet report, sending...
I20250114 20:52:51.756448   510 ts_manager.cc:194] Registered new tserver with Master: baf2aa3bc18f495e98016d12688027f4 (127.0.104.1:40927)
I20250114 20:52:51.755440   729 sys_catalog.cc:455] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 618aeafbc5ca4517958b6443719977c0. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.757189   729 sys_catalog.cc:458] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:51.764092   728 heartbeater.cc:346] Connected to a master server at 127.0.104.62:41559
I20250114 20:52:51.764331   728 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.765023   728 heartbeater.cc:510] Master 127.0.104.62:41559 requested a full tablet report, sending...
I20250114 20:52:51.766388   446 ts_manager.cc:194] Registered new tserver with Master: baf2aa3bc18f495e98016d12688027f4 (127.0.104.1:40927)
W20250114 20:52:51.766444   737 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.768517   736 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:51.770242   736 catalog_manager.cc:1485] Initializing Kudu cluster ID...
W20250114 20:52:51.771461   735 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:51.780973   739 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.782040   416 server_base.cc:1034] running on GCE node
I20250114 20:52:51.783222   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.783321   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.783397   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971783359 us; error 0 us; skew 500 ppm
I20250114 20:52:51.783586   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.784530   456 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 1 FOLLOWER]: Refusing update from remote peer 618aeafbc5ca4517958b6443719977c0: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:51.785076   586 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 1 FOLLOWER]: Refusing update from remote peer 618aeafbc5ca4517958b6443719977c0: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:51.785331   729 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:51.785770   620 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:51.802000   619 sys_catalog.cc:455] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 618aeafbc5ca4517958b6443719977c0. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.802348   619 sys_catalog.cc:458] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.804584   741 mvcc.cc:204] Tried to move back new op lower bound from 7114293132419080192 to 7114293132261953536. Current Snapshot: MvccSnapshot[applied={T|T < 7114293132419080192}]
I20250114 20:52:51.814666   619 sys_catalog.cc:455] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.814960   619 sys_catalog.cc:458] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.818130   620 sys_catalog.cc:455] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.818459   620 sys_catalog.cc:458] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:51.821684   736 catalog_manager.cc:1348] Generated new cluster ID: a9f5628f123f4491a89dffdf0b21a2b0
I20250114 20:52:51.821800   736 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:51.822115   416 webserver.cc:458] Webserver started at http://127.0.104.2:32875/ using document root <none> and password file <none>
I20250114 20:52:51.822469   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.822980   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.823168   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.824040   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-1-root/instance:
uuid: "4279213b86a64644b43b4c680b653859"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.824733   621 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 618aeafbc5ca4517958b6443719977c0. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.824986   621 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.827967   416 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.000s	sys 0.003s
I20250114 20:52:51.830816   752 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.831256   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.002s
I20250114 20:52:51.831389   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-1-root
uuid: "4279213b86a64644b43b4c680b653859"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.831560   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.833863   620 sys_catalog.cc:455] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.834141   620 sys_catalog.cc:458] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:51.841235   736 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:52:51.843144   736 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:51.845702   621 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "618aeafbc5ca4517958b6443719977c0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1b7cdb7e644c4cec889dad12a8e66211" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 41559 } } peers { permanent_uuid: "618aeafbc5ca4517958b6443719977c0" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37021 } } peers { permanent_uuid: "24f175e3435c473da9913c9384d1c34a" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 38961 } } }
I20250114 20:52:51.846902   621 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:51.864869   736 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: Generated new TSK 0
I20250114 20:52:51.866202   736 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:51.878737   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.879771   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.880481   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:51.881454   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:51.881527   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.881639   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:51.881695   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.902200   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.2:43287
I20250114 20:52:51.902238   815 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.2:43287 every 8 connection(s)
I20250114 20:52:51.907341   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:51.926385   816 heartbeater.cc:346] Connected to a master server at 127.0.104.60:38961
I20250114 20:52:51.926591   816 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.927240   816 heartbeater.cc:510] Master 127.0.104.60:38961 requested a full tablet report, sending...
I20250114 20:52:51.928596   576 ts_manager.cc:194] Registered new tserver with Master: 4279213b86a64644b43b4c680b653859 (127.0.104.2:43287)
W20250114 20:52:51.931279   823 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.942003   817 heartbeater.cc:346] Connected to a master server at 127.0.104.62:41559
I20250114 20:52:51.942193   817 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.943347   817 heartbeater.cc:510] Master 127.0.104.62:41559 requested a full tablet report, sending...
I20250114 20:52:51.943410   818 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37021
I20250114 20:52:51.944026   818 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:51.945336   818 heartbeater.cc:510] Master 127.0.104.61:37021 requested a full tablet report, sending...
I20250114 20:52:51.945437   446 ts_manager.cc:194] Registered new tserver with Master: 4279213b86a64644b43b4c680b653859 (127.0.104.2:43287)
I20250114 20:52:51.946419   510 ts_manager.cc:194] Registered new tserver with Master: 4279213b86a64644b43b4c680b653859 (127.0.104.2:43287)
W20250114 20:52:51.947670   824 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.950346   510 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37562
I20250114 20:52:51.953485   416 server_base.cc:1034] running on GCE node
W20250114 20:52:51.954303   826 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:51.954691   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:51.954775   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:51.954835   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887971954814 us; error 0 us; skew 500 ppm
I20250114 20:52:51.955017   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:51.956511   416 webserver.cc:458] Webserver started at http://127.0.104.3:38067/ using document root <none> and password file <none>
I20250114 20:52:51.956782   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:51.956877   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:51.957046   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:51.957737   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-2-root/instance:
uuid: "192ab7415abd49ba92502b54e2b9bbd7"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.960587   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.004s	sys 0.000s
I20250114 20:52:51.963128   831 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.963521   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.001s
I20250114 20:52:51.963639   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-2-root
uuid: "192ab7415abd49ba92502b54e2b9bbd7"
format_stamp: "Formatted at 2025-01-14 20:52:51 on dist-test-slave-npjh"
I20250114 20:52:51.963773   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskAtLeaderMaster.1736887971271855-416-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:51.986707   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:51.987591   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:51.988919   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:51.990103   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:51.990211   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:51.990320   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:51.990401   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.012744   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.3:35661
I20250114 20:52:52.012768   893 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.3:35661 every 8 connection(s)
I20250114 20:52:52.039245   895 heartbeater.cc:346] Connected to a master server at 127.0.104.62:41559
I20250114 20:52:52.039270   894 heartbeater.cc:346] Connected to a master server at 127.0.104.60:38961
I20250114 20:52:52.039449   895 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.039505   894 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.040122   895 heartbeater.cc:510] Master 127.0.104.62:41559 requested a full tablet report, sending...
I20250114 20:52:52.040151   894 heartbeater.cc:510] Master 127.0.104.60:38961 requested a full tablet report, sending...
I20250114 20:52:52.041342   576 ts_manager.cc:194] Registered new tserver with Master: 192ab7415abd49ba92502b54e2b9bbd7 (127.0.104.3:35661)
I20250114 20:52:52.042351   446 ts_manager.cc:194] Registered new tserver with Master: 192ab7415abd49ba92502b54e2b9bbd7 (127.0.104.3:35661)
I20250114 20:52:52.043560   896 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37021
I20250114 20:52:52.043707   896 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.044229   896 heartbeater.cc:510] Master 127.0.104.61:37021 requested a full tablet report, sending...
I20250114 20:52:52.045099   510 ts_manager.cc:194] Registered new tserver with Master: 192ab7415abd49ba92502b54e2b9bbd7 (127.0.104.3:35661)
I20250114 20:52:52.045990   416 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.030355711s
I20250114 20:52:52.046270   510 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37576
I20250114 20:52:52.049803   416 tablet_server.cc:178] TabletServer@127.0.104.1:0 shutting down...
I20250114 20:52:52.062937   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.077270   416 tablet_server.cc:195] TabletServer@127.0.104.1:0 shutdown complete.
I20250114 20:52:52.083328   416 tablet_server.cc:178] TabletServer@127.0.104.2:0 shutting down...
I20250114 20:52:52.094831   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.108466   416 tablet_server.cc:195] TabletServer@127.0.104.2:0 shutdown complete.
I20250114 20:52:52.111946   416 tablet_server.cc:178] TabletServer@127.0.104.3:0 shutting down...
I20250114 20:52:52.122151   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.135679   416 tablet_server.cc:195] TabletServer@127.0.104.3:0 shutdown complete.
I20250114 20:52:52.138804   416 master.cc:537] Master@127.0.104.62:41559 shutting down...
I20250114 20:52:52.147519   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:52.147854   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.148012   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 1b7cdb7e644c4cec889dad12a8e66211: stopping tablet replica
I20250114 20:52:52.163995   416 master.cc:559] Master@127.0.104.62:41559 shutdown complete.
I20250114 20:52:52.169896   416 master.cc:537] Master@127.0.104.61:37021 shutting down...
I20250114 20:52:52.181433   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:52:52.181979   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.182209   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 618aeafbc5ca4517958b6443719977c0: stopping tablet replica
I20250114 20:52:52.197013   416 master.cc:559] Master@127.0.104.61:37021 shutdown complete.
I20250114 20:52:52.201346   416 master.cc:537] Master@127.0.104.60:38961 shutting down...
I20250114 20:52:52.210714   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:52.210992   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.211122   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 24f175e3435c473da9913c9384d1c34a: stopping tablet replica
I20250114 20:52:52.225242   416 master.cc:559] Master@127.0.104.60:38961 shutdown complete.
[       OK ] TokenSignerITest.TskAtLeaderMaster (888 ms)
[ RUN      ] TokenSignerITest.TskClusterRestart
I20250114 20:52:52.236101   416 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.0.104.62:42787,127.0.104.61:34355,127.0.104.60:39313
I20250114 20:52:52.236770   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:52.239915   902 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.240597   903 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.241628   905 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.242383   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.242662   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.242728   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.242799   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972242766 us; error 0 us; skew 500 ppm
I20250114 20:52:52.242957   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.244271   416 webserver.cc:458] Webserver started at http://127.0.104.62:42325/ using document root <none> and password file <none>
I20250114 20:52:52.244503   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.244597   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.244724   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.245398   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root/instance:
uuid: "e1995c3cacc2422fb34c78c4232ebf49"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.247864   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:52.249620   910 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.249958   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:52.250077   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
uuid: "e1995c3cacc2422fb34c78c4232ebf49"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.250205   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.272154   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.272838   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.286445   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.62:42787
I20250114 20:52:52.286504   961 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.62:42787 every 8 connection(s)
I20250114 20:52:52.287815   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.287904   962 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:52:52.291221   964 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.291774   962 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:52.292124   965 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.293061   967 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.293587   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.293850   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.293939   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.294044   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972293992 us; error 0 us; skew 500 ppm
I20250114 20:52:52.294239   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.295681   416 webserver.cc:458] Webserver started at http://127.0.104.61:37507/ using document root <none> and password file <none>
I20250114 20:52:52.295984   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.296101   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.296278   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.297044   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root/instance:
uuid: "2edbe75e77474dba9065d6d1b9ed00d3"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.299489   962 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:52.300279   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.003s	sys 0.000s
W20250114 20:52:52.301604   962 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.61:34355: Network error: Client connection negotiation failed: client connection to 127.0.104.61:34355: connect: Connection refused (error 111)
I20250114 20:52:52.302387   975 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.302754   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.002s	sys 0.000s
I20250114 20:52:52.302911   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
uuid: "2edbe75e77474dba9065d6d1b9ed00d3"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.303057   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.316926   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.317571   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.331823   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.61:34355
I20250114 20:52:52.331856  1026 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.61:34355 every 8 connection(s)
I20250114 20:52:52.333200   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.333278  1027 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:52:52.337162  1029 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.337157  1027 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:52.337982  1030 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.338995   416 server_base.cc:1034] running on GCE node
W20250114 20:52:52.339598  1032 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.340061   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.340152   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.340229   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972340190 us; error 0 us; skew 500 ppm
I20250114 20:52:52.340437   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.341826   416 webserver.cc:458] Webserver started at http://127.0.104.60:44843/ using document root <none> and password file <none>
I20250114 20:52:52.342056   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.342146   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.342322   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.343147   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root/instance:
uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.344529  1027 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:52.346336   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:52:52.348152  1039 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.349028   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.002s	sys 0.000s
I20250114 20:52:52.349180   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.349351   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.355270  1027 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:52.357345  1027 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:39313: Network error: Client connection negotiation failed: client connection to 127.0.104.60:39313: connect: Connection refused (error 111)
I20250114 20:52:52.366830   962 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } attempt: 1
I20250114 20:52:52.373685   962 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:52.375396   962 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:39313: Network error: Client connection negotiation failed: client connection to 127.0.104.60:39313: connect: Connection refused (error 111)
I20250114 20:52:52.376776   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.377310   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.391723   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.60:39313
I20250114 20:52:52.391764  1091 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.60:39313 every 8 connection(s)
I20250114 20:52:52.392717   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:52:52.393245  1092 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:52.394547  1027 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } attempt: 1
I20250114 20:52:52.395891  1092 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:52.403080  1092 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:52.403612   962 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } attempt: 1
I20250114 20:52:52.405311  1027 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: Bootstrap starting.
I20250114 20:52:52.407044  1027 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:52.409165  1027 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: No bootstrap required, opened a new log
I20250114 20:52:52.409591  1092 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:52.410043  1027 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.410308  1027 raft_consensus.cc:383] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:52.410399  1027 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2edbe75e77474dba9065d6d1b9ed00d3, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.410663  1027 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.417200   962 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Bootstrap starting.
I20250114 20:52:52.418354  1098 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.418643  1098 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.419168   962 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:52.419243  1027 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.420955   962 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: No bootstrap required, opened a new log
I20250114 20:52:52.422016   962 raft_consensus.cc:357] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.422187   962 raft_consensus.cc:383] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:52.422248   962 raft_consensus.cc:738] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1995c3cacc2422fb34c78c4232ebf49, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.422523  1092 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: Bootstrap starting.
I20250114 20:52:52.422557   962 consensus_queue.cc:260] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.423370  1103 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.423637  1103 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.423723   962 sys_catalog.cc:564] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.424650  1092 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:52.427049  1092 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: No bootstrap required, opened a new log
I20250114 20:52:52.427925  1092 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.428180  1092 raft_consensus.cc:383] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:52.428301  1092 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2ef5ba4b0f1441ac8717dcd59d14d3ed, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.428599  1092 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.429344  1113 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.429708  1092 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.429706  1113 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: This master's current role is: FOLLOWER
W20250114 20:52:52.438154  1125 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:52.438450  1125 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:52:52.439419  1130 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:52.439528  1130 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:52.441614   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
I20250114 20:52:52.441753   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:52:52.442492   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:52.442613  1135 catalog_manager.cc:1559] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:52.442723  1135 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:52:52.447991  1136 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.448726  1103 raft_consensus.cc:491] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:52.448913  1103 raft_consensus.cc:513] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
W20250114 20:52:52.449101  1137 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.449661  1103 leader_election.cc:290] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 2edbe75e77474dba9065d6d1b9ed00d3 (127.0.104.61:34355), 2ef5ba4b0f1441ac8717dcd59d14d3ed (127.0.104.60:39313)
W20250114 20:52:52.450254  1139 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.450767  1002 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" is_pre_election: true
I20250114 20:52:52.450999   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.451067  1002 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 0.
I20250114 20:52:52.451354  1067 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" is_pre_election: true
I20250114 20:52:52.451587   911 leader_election.cc:304] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2edbe75e77474dba9065d6d1b9ed00d3, e1995c3cacc2422fb34c78c4232ebf49; no voters: 
I20250114 20:52:52.451598  1067 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 0.
I20250114 20:52:52.451694   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.451798   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.451856   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972451830 us; error 0 us; skew 500 ppm
I20250114 20:52:52.451915  1103 raft_consensus.cc:2798] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:52:52.452029  1103 raft_consensus.cc:491] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:52.452086   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.452133  1103 raft_consensus.cc:3054] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:52.453644   416 webserver.cc:458] Webserver started at http://127.0.104.1:35333/ using document root <none> and password file <none>
I20250114 20:52:52.453945   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.454046   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.454206   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.454924   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root/instance:
uuid: "a28114ddab024404aa7ab12174e6e367"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.455031  1103 raft_consensus.cc:513] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.455742  1103 leader_election.cc:290] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 1 election: Requested vote from peers 2edbe75e77474dba9065d6d1b9ed00d3 (127.0.104.61:34355), 2ef5ba4b0f1441ac8717dcd59d14d3ed (127.0.104.60:39313)
I20250114 20:52:52.456166  1002 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2edbe75e77474dba9065d6d1b9ed00d3"
I20250114 20:52:52.456390  1002 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:52.458227   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.000s	sys 0.004s
I20250114 20:52:52.459092  1002 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 1.
I20250114 20:52:52.459508  1067 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed"
I20250114 20:52:52.459550   911 leader_election.cc:304] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2edbe75e77474dba9065d6d1b9ed00d3, e1995c3cacc2422fb34c78c4232ebf49; no voters: 
I20250114 20:52:52.459708  1067 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:52.459844  1103 raft_consensus.cc:2798] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:52:52.460242  1103 raft_consensus.cc:695] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 LEADER]: Becoming Leader. State: Replica: e1995c3cacc2422fb34c78c4232ebf49, State: Running, Role: LEADER
I20250114 20:52:52.460584  1103 consensus_queue.cc:237] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.460785  1145 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.461144   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.002s
I20250114 20:52:52.461270   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
uuid: "a28114ddab024404aa7ab12174e6e367"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.461422   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.462741  1067 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 1.
I20250114 20:52:52.463977  1144 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e1995c3cacc2422fb34c78c4232ebf49. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.464247  1144 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:52.464754  1151 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:52.466655  1151 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:52:52.469772  1067 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Refusing update from remote peer e1995c3cacc2422fb34c78c4232ebf49: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:52.470306  1002 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Refusing update from remote peer e1995c3cacc2422fb34c78c4232ebf49: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:52.470414  1103 consensus_queue.cc:1035] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:52.470961  1144 consensus_queue.cc:1035] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:52.475639   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.476514   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.476943  1098 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e1995c3cacc2422fb34c78c4232ebf49. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.477293  1098 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.477706   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:52.480808  1144 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.481101  1144 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:52.482355   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:52.482410  1151 catalog_manager.cc:1348] Generated new cluster ID: 291a1c5112434d14b0d0900f78acfeaa
I20250114 20:52:52.482479   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.482529  1151 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:52.482595   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:52.482676   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.488461  1113 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: SysCatalogTable state changed. Reason: New leader e1995c3cacc2422fb34c78c4232ebf49. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.488782  1113 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.489943  1103 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.490264  1103 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:52.490767  1098 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.491050  1098 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.499995  1113 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.500327  1113 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.509052  1151 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:52:52.510186  1151 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:52.530102  1151 catalog_manager.cc:5899] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Generated new TSK 0
I20250114 20:52:52.530417  1151 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:52.533296   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.1:44981
I20250114 20:52:52.533360  1219 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.1:44981 every 8 connection(s)
I20250114 20:52:52.555182   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.556746  1220 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:52.556912  1220 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.557463  1220 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
I20250114 20:52:52.558403  1221 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:52.558555  1221 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.558712  1057 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:44981)
I20250114 20:52:52.559067  1221 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
I20250114 20:52:52.560038   927 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:44981)
I20250114 20:52:52.560379  1222 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:52.560515  1222 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.561050  1222 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:52.561777   927 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:57118
I20250114 20:52:52.562292   992 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:44981)
W20250114 20:52:52.562939  1227 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.565111  1228 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.566427  1230 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.567159   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.567425   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.567488   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.567555   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972567525 us; error 0 us; skew 500 ppm
I20250114 20:52:52.567708   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.569108   416 webserver.cc:458] Webserver started at http://127.0.104.2:36835/ using document root <none> and password file <none>
I20250114 20:52:52.569324   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.569416   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.569545   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.570200   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root/instance:
uuid: "ea9d1a51dd7844dca8e37c003492afe7"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.572577   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:52.574139  1235 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.574493   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.001s
I20250114 20:52:52.574594   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
uuid: "ea9d1a51dd7844dca8e37c003492afe7"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.574728   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.586369   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.587445   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.588038   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:52.588950   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:52.589025   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.589138   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:52.589187   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.606340   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.2:34521
I20250114 20:52:52.606369  1297 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.2:34521 every 8 connection(s)
I20250114 20:52:52.609079   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.617587  1298 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:52.617791  1298 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.618530  1298 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
I20250114 20:52:52.619810  1057 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:34521)
I20250114 20:52:52.621182  1300 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:52.621333  1299 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:52.621354  1300 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.621546  1299 heartbeater.cc:463] Registering TS with master...
W20250114 20:52:52.622113  1305 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.622170  1300 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:52.622157  1299 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
W20250114 20:52:52.622676  1306 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.623095   927 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:34521)
I20250114 20:52:52.623229   992 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:34521)
I20250114 20:52:52.624176   927 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:57120
W20250114 20:52:52.625742  1308 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.626483   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.626734   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.626802   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.626843   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972626830 us; error 0 us; skew 500 ppm
I20250114 20:52:52.626987   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.628753   416 webserver.cc:458] Webserver started at http://127.0.104.3:41955/ using document root <none> and password file <none>
I20250114 20:52:52.628986   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.629052   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:52.629168   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:52.629750   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root/instance:
uuid: "c4377648ba6740e8a75602fb58a7e205"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.632151   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:52:52.633801  1313 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.634177   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:52.634277   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
uuid: "c4377648ba6740e8a75602fb58a7e205"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.634388   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.658527   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.659125   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.659739   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:52.660712   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:52.660806   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.660904   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:52.660982   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.678925   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.3:38919
I20250114 20:52:52.678962  1375 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.3:38919 every 8 connection(s)
I20250114 20:52:52.690577  1377 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:52.691293  1377 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.691951  1377 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
I20250114 20:52:52.693281   927 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:38919)
I20250114 20:52:52.694291   927 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:57136
I20250114 20:52:52.696928  1378 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:52.697096  1378 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.697494  1378 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:52.698311  1376 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:52.698446  1376 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:52.698580   992 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:38919)
I20250114 20:52:52.698822  1376 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
I20250114 20:52:52.699680  1057 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:38919)
I20250114 20:52:52.700119   416 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012901618s
I20250114 20:52:52.701705   416 tablet_server.cc:178] TabletServer@127.0.104.1:0 shutting down...
I20250114 20:52:52.709511   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.722718   416 tablet_server.cc:195] TabletServer@127.0.104.1:0 shutdown complete.
I20250114 20:52:52.725827   416 tablet_server.cc:178] TabletServer@127.0.104.2:0 shutting down...
I20250114 20:52:52.734531   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.748306   416 tablet_server.cc:195] TabletServer@127.0.104.2:0 shutdown complete.
I20250114 20:52:52.751261   416 tablet_server.cc:178] TabletServer@127.0.104.3:0 shutting down...
I20250114 20:52:52.759737   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:52.773149   416 tablet_server.cc:195] TabletServer@127.0.104.3:0 shutdown complete.
I20250114 20:52:52.775918   416 master.cc:537] Master@127.0.104.62:42787 shutting down...
I20250114 20:52:52.782455   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:52:52.782840   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.783030   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: stopping tablet replica
I20250114 20:52:52.797344   416 master.cc:559] Master@127.0.104.62:42787 shutdown complete.
I20250114 20:52:52.801448   416 master.cc:537] Master@127.0.104.61:34355 shutting down...
I20250114 20:52:52.808092   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:52.808360   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.808497   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: stopping tablet replica
I20250114 20:52:52.822461   416 master.cc:559] Master@127.0.104.61:34355 shutdown complete.
I20250114 20:52:52.826309   416 master.cc:537] Master@127.0.104.60:39313 shutting down...
I20250114 20:52:52.833456   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:52.833727   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:52.833850   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: stopping tablet replica
I20250114 20:52:52.837728   416 master.cc:559] Master@127.0.104.60:39313 shutdown complete.
I20250114 20:52:52.842024   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:52.845116  1384 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.845700  1385 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.846263  1387 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.847157   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.847419   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.847496   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.847549   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972847531 us; error 0 us; skew 500 ppm
I20250114 20:52:52.847728   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.849031   416 webserver.cc:458] Webserver started at http://127.0.104.62:45047/ using document root <none> and password file <none>
I20250114 20:52:52.849277   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.849341   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:52.851243   416 fs_manager.cc:714] Time spent opening directory manager: real 0.001s	user 0.002s	sys 0.000s
I20250114 20:52:52.852612  1392 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.852993   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:52.853089   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
uuid: "e1995c3cacc2422fb34c78c4232ebf49"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.853210   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.866918   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.867503   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.880249   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.62:42787
I20250114 20:52:52.880283  1443 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.62:42787 every 8 connection(s)
I20250114 20:52:52.881551   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:52.884891  1446 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.885032  1444 sys_catalog.cc:263] Verifying existing consensus state
W20250114 20:52:52.885809  1447 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.886382  1449 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.887001   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.887310   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.887406   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.887477   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972887448 us; error 0 us; skew 500 ppm
I20250114 20:52:52.887481  1444 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Bootstrap starting.
I20250114 20:52:52.887701   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.889191   416 webserver.cc:458] Webserver started at http://127.0.104.61:43213/ using document root <none> and password file <none>
I20250114 20:52:52.889477   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.889556   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:52.891791   416 fs_manager.cc:714] Time spent opening directory manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:52:52.893345  1455 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.893698   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:52.893821   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
uuid: "2edbe75e77474dba9065d6d1b9ed00d3"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.893949   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.897472  1444 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=3 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:52:52.897987  1444 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Bootstrap complete.
I20250114 20:52:52.898772  1444 raft_consensus.cc:357] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.898981  1444 raft_consensus.cc:738] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1995c3cacc2422fb34c78c4232ebf49, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.899226  1444 consensus_queue.cc:260] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.899828  1460 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.900053  1460 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.900174  1444 sys_catalog.cc:564] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.905643  1471 catalog_manager.cc:1260] Loaded cluster ID: 291a1c5112434d14b0d0900f78acfeaa
I20250114 20:52:52.905742  1471 catalog_manager.cc:1553] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: loading cluster ID for follower catalog manager: success
I20250114 20:52:52.907835  1471 catalog_manager.cc:1575] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: acquiring CA information for follower catalog manager: success
I20250114 20:52:52.909193  1471 catalog_manager.cc:1603] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:52:52.914328   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.914916   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.926343   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.61:34355
I20250114 20:52:52.926378  1518 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.61:34355 every 8 connection(s)
I20250114 20:52:52.927567   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.930199  1519 sys_catalog.cc:263] Verifying existing consensus state
W20250114 20:52:52.930943  1521 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.931533  1522 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:52.932016  1524 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:52.932447  1519 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: Bootstrap starting.
I20250114 20:52:52.932660   416 server_base.cc:1034] running on GCE node
I20250114 20:52:52.932937   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:52.933028   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:52.933087   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887972933061 us; error 0 us; skew 500 ppm
I20250114 20:52:52.933285   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:52.934597   416 webserver.cc:458] Webserver started at http://127.0.104.60:41699/ using document root <none> and password file <none>
I20250114 20:52:52.934835   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:52.934904   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:52.936726   416 fs_manager.cc:714] Time spent opening directory manager: real 0.001s	user 0.000s	sys 0.002s
I20250114 20:52:52.938138  1530 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:52.938546   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.001s
I20250114 20:52:52.938660   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:52.938783   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:52.940048  1519 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=3 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:52:52.940598  1519 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: Bootstrap complete.
I20250114 20:52:52.941344  1519 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.941526  1519 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2edbe75e77474dba9065d6d1b9ed00d3, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.941777  1519 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.942394  1535 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.942670  1535 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.942698  1519 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.947710  1546 catalog_manager.cc:1260] Loaded cluster ID: 291a1c5112434d14b0d0900f78acfeaa
I20250114 20:52:52.947795  1546 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: loading cluster ID for follower catalog manager: success
I20250114 20:52:52.949985  1546 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: acquiring CA information for follower catalog manager: success
I20250114 20:52:52.951400  1546 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:52:52.967190   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:52.967775   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:52.980108   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.60:39313
I20250114 20:52:52.980139  1593 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.60:39313 every 8 connection(s)
I20250114 20:52:52.981079   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:52:52.981197   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
I20250114 20:52:52.981263   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:52:52.983321  1594 sys_catalog.cc:263] Verifying existing consensus state
I20250114 20:52:52.985251  1594 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: Bootstrap starting.
I20250114 20:52:52.992169  1594 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=3 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:52:52.992599  1594 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: Bootstrap complete.
I20250114 20:52:52.993352  1594 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.993525  1594 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2ef5ba4b0f1441ac8717dcd59d14d3ed, State: Initialized, Role: FOLLOWER
I20250114 20:52:52.993782  1594 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:52.994385  1597 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:52.994560  1594 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:52.994577  1597 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:52.998875   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:52.999680  1608 catalog_manager.cc:1260] Loaded cluster ID: 291a1c5112434d14b0d0900f78acfeaa
I20250114 20:52:52.999778  1608 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: loading cluster ID for follower catalog manager: success
I20250114 20:52:53.002240  1608 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: acquiring CA information for follower catalog manager: success
I20250114 20:52:53.004072  1608 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
W20250114 20:52:53.004745  1609 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:53.006843  1610 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.007547   416 server_base.cc:1034] running on GCE node
W20250114 20:52:53.008031  1612 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.008517   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:53.008611   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:53.008695   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887973008660 us; error 0 us; skew 500 ppm
I20250114 20:52:53.008889   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:53.010493   416 webserver.cc:458] Webserver started at http://127.0.104.1:38445/ using document root <none> and password file <none>
I20250114 20:52:53.010790   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:53.010886   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:53.013435   416 fs_manager.cc:714] Time spent opening directory manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:52:53.015056  1617 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.015458   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.001s
I20250114 20:52:53.015584   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
uuid: "a28114ddab024404aa7ab12174e6e367"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:53.015762   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:53.033077   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:53.033784   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:53.034427   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:53.035475   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:53.035557   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.035681   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:53.035740   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.051041   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.1:42973
I20250114 20:52:53.051071  1679 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.1:42973 every 8 connection(s)
I20250114 20:52:53.055450   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:53.060511  1690 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:53.061066  1691 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.065313   416 server_base.cc:1034] running on GCE node
W20250114 20:52:53.065681  1693 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.066140   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:53.066256   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:53.066325   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887973066300 us; error 0 us; skew 500 ppm
I20250114 20:52:53.066529   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:53.075503  1680 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:53.075693  1680 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.075814   416 webserver.cc:458] Webserver started at http://127.0.104.2:32961/ using document root <none> and password file <none>
I20250114 20:52:53.076177   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:53.076242  1680 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
I20250114 20:52:53.076355   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:53.076434  1682 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:53.076551  1682 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.077023  1682 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:53.077570  1681 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:53.077690  1681 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.077749  1559 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:42973)
I20250114 20:52:53.078086  1484 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:42973)
I20250114 20:52:53.078152  1681 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
I20250114 20:52:53.079494   416 fs_manager.cc:714] Time spent opening directory manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:52:53.079541  1409 ts_manager.cc:194] Registered new tserver with Master: a28114ddab024404aa7ab12174e6e367 (127.0.104.1:42973)
I20250114 20:52:53.080958  1698 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.081321   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.000s
I20250114 20:52:53.081421   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
uuid: "ea9d1a51dd7844dca8e37c003492afe7"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:53.081532   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:53.097155   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:53.097975   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:53.098670   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:53.099763   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:53.099867   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.099984   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:53.100064   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.115289   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.2:44653
I20250114 20:52:53.115320  1760 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.2:44653 every 8 connection(s)
I20250114 20:52:53.120795   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:53.139907  1761 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:53.140103  1761 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.140723  1761 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
W20250114 20:52:53.142167  1768 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.143265  1763 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:53.143520  1559 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:44653)
W20250114 20:52:53.143915  1769 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.144160  1763 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.145797  1763 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:53.146972  1484 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:44653)
W20250114 20:52:53.147442  1771 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:53.147539  1762 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:53.147682  1762 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.148010  1762 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
I20250114 20:52:53.148061   416 server_base.cc:1034] running on GCE node
I20250114 20:52:53.148401   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:53.148469   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:53.148519   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887973148491 us; error 0 us; skew 500 ppm
I20250114 20:52:53.148635   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:53.149040  1409 ts_manager.cc:194] Registered new tserver with Master: ea9d1a51dd7844dca8e37c003492afe7 (127.0.104.2:44653)
I20250114 20:52:53.150035   416 webserver.cc:458] Webserver started at http://127.0.104.3:46805/ using document root <none> and password file <none>
I20250114 20:52:53.150239   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:53.150305   416 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:52:53.152285   416 fs_manager.cc:714] Time spent opening directory manager: real 0.002s	user 0.000s	sys 0.002s
I20250114 20:52:53.154069  1776 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.154588   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.001s
I20250114 20:52:53.154685   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
uuid: "c4377648ba6740e8a75602fb58a7e205"
format_stamp: "Formatted at 2025-01-14 20:52:52 on dist-test-slave-npjh"
I20250114 20:52:53.154843   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskClusterRestart.1736887971271855-416-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:53.187899   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:53.188553   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:53.189126   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:53.189962   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:53.190032   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.190145   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:53.190197   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:53.205709   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.3:35247
I20250114 20:52:53.205732  1838 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.3:35247 every 8 connection(s)
I20250114 20:52:53.218815  1839 heartbeater.cc:346] Connected to a master server at 127.0.104.60:39313
I20250114 20:52:53.219054  1839 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.219687  1839 heartbeater.cc:510] Master 127.0.104.60:39313 requested a full tablet report, sending...
I20250114 20:52:53.221169  1559 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:35247)
I20250114 20:52:53.222872  1840 heartbeater.cc:346] Connected to a master server at 127.0.104.62:42787
I20250114 20:52:53.223055  1840 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.223558  1840 heartbeater.cc:510] Master 127.0.104.62:42787 requested a full tablet report, sending...
I20250114 20:52:53.223976  1841 heartbeater.cc:346] Connected to a master server at 127.0.104.61:34355
I20250114 20:52:53.224138  1841 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:53.224645  1841 heartbeater.cc:510] Master 127.0.104.61:34355 requested a full tablet report, sending...
I20250114 20:52:53.224773  1409 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:35247)
I20250114 20:52:53.225484  1484 ts_manager.cc:194] Registered new tserver with Master: c4377648ba6740e8a75602fb58a7e205 (127.0.104.3:35247)
I20250114 20:52:53.225732   416 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.005831416s
I20250114 20:52:54.126441  1847 raft_consensus.cc:491] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:54.126716  1847 raft_consensus.cc:513] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:54.127640  1847 leader_election.cc:290] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 2edbe75e77474dba9065d6d1b9ed00d3 (127.0.104.61:34355), 2ef5ba4b0f1441ac8717dcd59d14d3ed (127.0.104.60:39313)
I20250114 20:52:54.144119  1494 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" is_pre_election: true
I20250114 20:52:54.144374  1494 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 1.
I20250114 20:52:54.144886  1393 leader_election.cc:304] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2edbe75e77474dba9065d6d1b9ed00d3, e1995c3cacc2422fb34c78c4232ebf49; no voters: 
I20250114 20:52:54.144861  1569 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" is_pre_election: true
I20250114 20:52:54.145123  1569 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 1.
I20250114 20:52:54.145306  1847 raft_consensus.cc:2798] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:52:54.145439  1847 raft_consensus.cc:491] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:54.145558  1847 raft_consensus.cc:3054] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:52:54.147734  1847 raft_consensus.cc:513] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:54.148448  1847 leader_election.cc:290] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 2 election: Requested vote from peers 2edbe75e77474dba9065d6d1b9ed00d3 (127.0.104.61:34355), 2ef5ba4b0f1441ac8717dcd59d14d3ed (127.0.104.60:39313)
I20250114 20:52:54.148736  1494 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "2edbe75e77474dba9065d6d1b9ed00d3"
I20250114 20:52:54.148943  1494 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:52:54.149010  1569 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "e1995c3cacc2422fb34c78c4232ebf49" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed"
I20250114 20:52:54.149181  1569 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:52:54.151521  1494 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 2.
I20250114 20:52:54.151624  1569 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1995c3cacc2422fb34c78c4232ebf49 in term 2.
I20250114 20:52:54.151899  1393 leader_election.cc:304] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2edbe75e77474dba9065d6d1b9ed00d3, e1995c3cacc2422fb34c78c4232ebf49; no voters: 
I20250114 20:52:54.152181  1847 raft_consensus.cc:2798] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:52:54.152603  1847 raft_consensus.cc:695] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 LEADER]: Becoming Leader. State: Replica: e1995c3cacc2422fb34c78c4232ebf49, State: Running, Role: LEADER
I20250114 20:52:54.152889  1847 consensus_queue.cc:237] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } }
I20250114 20:52:54.154505  1852 sys_catalog.cc:455] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e1995c3cacc2422fb34c78c4232ebf49. Latest consensus state: current_term: 2 leader_uuid: "e1995c3cacc2422fb34c78c4232ebf49" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e1995c3cacc2422fb34c78c4232ebf49" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 42787 } } peers { permanent_uuid: "2edbe75e77474dba9065d6d1b9ed00d3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 34355 } } peers { permanent_uuid: "2ef5ba4b0f1441ac8717dcd59d14d3ed" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 39313 } } }
I20250114 20:52:54.154714  1852 sys_catalog.cc:458] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:54.155259  1854 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:54.157020  1854 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:52:54.157824  1854 catalog_manager.cc:1260] Loaded cluster ID: 291a1c5112434d14b0d0900f78acfeaa
I20250114 20:52:54.157903  1854 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:54.159001  1854 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:54.159780  1854 catalog_manager.cc:5910] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Loaded TSK: 0
I20250114 20:52:54.160984  1854 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:54.227325  1409 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:57164
I20250114 20:52:54.245101   416 tablet_server.cc:178] TabletServer@127.0.104.1:0 shutting down...
I20250114 20:52:54.253037   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:54.266325   416 tablet_server.cc:195] TabletServer@127.0.104.1:0 shutdown complete.
I20250114 20:52:54.269043   416 tablet_server.cc:178] TabletServer@127.0.104.2:0 shutting down...
I20250114 20:52:54.277113   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:54.289913   416 tablet_server.cc:195] TabletServer@127.0.104.2:0 shutdown complete.
I20250114 20:52:54.292429   416 tablet_server.cc:178] TabletServer@127.0.104.3:0 shutting down...
I20250114 20:52:54.300535   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:54.313444   416 tablet_server.cc:195] TabletServer@127.0.104.3:0 shutdown complete.
I20250114 20:52:54.316051   416 master.cc:537] Master@127.0.104.62:42787 shutting down...
I20250114 20:52:54.322875   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:52:54.323174   416 pending_rounds.cc:62] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Trying to abort 1 pending ops.
I20250114 20:52:54.323334   416 pending_rounds.cc:69] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49: Aborting op as it isn't in flight: id { term: 2 index: 5 } timestamp: 7114293142132809728 op_type: NO_OP noop_request { }
I20250114 20:52:54.323482   416 raft_consensus.cc:2883] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:52:54.323611   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P e1995c3cacc2422fb34c78c4232ebf49 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:54.323767   416 tablet_replica.cc:331] stopping tablet replica
I20250114 20:52:54.337563   416 master.cc:559] Master@127.0.104.62:42787 shutdown complete.
I20250114 20:52:54.341476   416 master.cc:537] Master@127.0.104.61:34355 shutting down...
I20250114 20:52:54.348315   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:54.348546   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 2edbe75e77474dba9065d6d1b9ed00d3 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:54.348686   416 tablet_replica.cc:331] stopping tablet replica
I20250114 20:52:54.361835   416 master.cc:559] Master@127.0.104.61:34355 shutdown complete.
I20250114 20:52:54.365343   416 master.cc:537] Master@127.0.104.60:39313 shutting down...
I20250114 20:52:54.372185   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:54.372431   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 2ef5ba4b0f1441ac8717dcd59d14d3ed [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:54.372531   416 tablet_replica.cc:331] stopping tablet replica
I20250114 20:52:54.386520   416 master.cc:559] Master@127.0.104.60:39313 shutdown complete.
[       OK ] TokenSignerITest.TskClusterRestart (2160 ms)
[ RUN      ] TokenSignerITest.TskMasterLeadershipChange
I20250114 20:52:54.397140   416 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.0.104.62:45789,127.0.104.61:42093,127.0.104.60:36427
I20250114 20:52:54.397727   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:54.400923  1855 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.401553  1856 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.402112  1858 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.403088   416 server_base.cc:1034] running on GCE node
I20250114 20:52:54.403369   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.403435   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.403506   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974403473 us; error 0 us; skew 500 ppm
I20250114 20:52:54.403666   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:54.404984   416 webserver.cc:458] Webserver started at http://127.0.104.62:39649/ using document root <none> and password file <none>
I20250114 20:52:54.405215   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.405310   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.405438   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.406100   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-0-root/instance:
uuid: "41b3ad6c440944dbbedfdbd441f9a081"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.408520   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:54.410104  1863 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.410452   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.001s
I20250114 20:52:54.410565   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-0-root
uuid: "41b3ad6c440944dbbedfdbd441f9a081"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.410703   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.429350   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.429961   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.442414   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.62:45789
I20250114 20:52:54.442452  1914 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.62:45789 every 8 connection(s)
I20250114 20:52:54.443711  1915 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:54.443756   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:54.446810  1915 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:54.447206  1917 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.448113  1918 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.448848  1920 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.449486   416 server_base.cc:1034] running on GCE node
I20250114 20:52:54.449776   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.449862   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.449924   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974449890 us; error 0 us; skew 500 ppm
I20250114 20:52:54.450103   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:54.451560   416 webserver.cc:458] Webserver started at http://127.0.104.61:43965/ using document root <none> and password file <none>
I20250114 20:52:54.451793   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.451890   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.452020   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.452901   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-1-root/instance:
uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.454330  1915 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.455772   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.003s	sys 0.000s
W20250114 20:52:54.456198  1915 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.61:42093: Network error: Client connection negotiation failed: client connection to 127.0.104.61:42093: connect: Connection refused (error 111)
I20250114 20:52:54.457417  1928 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.457784   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.001s
I20250114 20:52:54.457880   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-1-root
uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.457989   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.477722   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.478313   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.489892   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.61:42093
I20250114 20:52:54.489933  1979 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.61:42093 every 8 connection(s)
I20250114 20:52:54.490437  1915 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } attempt: 1
I20250114 20:52:54.491416   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:54.491488  1980 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:52:54.494963  1983 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.495544  1984 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.495844  1980 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:54.498163  1986 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.502490  1915 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.503196   416 server_base.cc:1034] running on GCE node
I20250114 20:52:54.503651   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.503732   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.503805   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974503765 us; error 0 us; skew 500 ppm
I20250114 20:52:54.504004   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
W20250114 20:52:54.504379  1915 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:36427: Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:54.505299   416 webserver.cc:458] Webserver started at http://127.0.104.60:45541/ using document root <none> and password file <none>
I20250114 20:52:54.505527   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.505656   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.505786   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.506512   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-2-root/instance:
uuid: "576ddd9df26d4dbf8115bf6de5c5fea6"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.507460  1980 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.509629   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:52:54.511592  1992 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.511993   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.001s
I20250114 20:52:54.512146   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-2-root
uuid: "576ddd9df26d4dbf8115bf6de5c5fea6"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.512356   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.513442  1980 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:54.515362  1980 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:36427: Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:54.536435   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.537007   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.537655  1915 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } attempt: 1
W20250114 20:52:54.539462  1915 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:36427: Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:54.548691   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.60:36427
I20250114 20:52:54.548728  2044 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.60:36427 every 8 connection(s)
I20250114 20:52:54.549628   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:52:54.550043  2045 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:54.552357  2045 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.558214  2045 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.560561  1980 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } attempt: 1
I20250114 20:52:54.564901  2045 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:54.569648  1980 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: Bootstrap starting.
I20250114 20:52:54.571058  1980 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:54.573006  1980 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: No bootstrap required, opened a new log
I20250114 20:52:54.573833  1980 raft_consensus.cc:357] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.574074  1980 raft_consensus.cc:383] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:54.574151  1980 raft_consensus.cc:738] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b1e7d89ed71e45f1a2c335e8cbb921cd, State: Initialized, Role: FOLLOWER
I20250114 20:52:54.574396  1980 consensus_queue.cc:260] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.574689  2045 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: Bootstrap starting.
I20250114 20:52:54.575047  2051 sys_catalog.cc:455] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.575233  2051 sys_catalog.cc:458] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.575333  1980 sys_catalog.cc:564] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:54.576555  2045 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:54.578492  2045 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: No bootstrap required, opened a new log
I20250114 20:52:54.579432  2045 raft_consensus.cc:357] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.579641  2045 raft_consensus.cc:383] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:54.579730  2045 raft_consensus.cc:738] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 576ddd9df26d4dbf8115bf6de5c5fea6, State: Initialized, Role: FOLLOWER
I20250114 20:52:54.579972  2045 consensus_queue.cc:260] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
W20250114 20:52:54.580487  2063 catalog_manager.cc:1559] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:54.580595  2063 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:54.580602  2064 sys_catalog.cc:455] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.580863  2064 sys_catalog.cc:458] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.580902  2045 sys_catalog.cc:564] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: configured and running, proceeding with master startup.
W20250114 20:52:54.585171  2075 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:54.585256  2075 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:54.603710  1915 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } attempt: 2
I20250114 20:52:54.611311  1915 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: Bootstrap starting.
I20250114 20:52:54.612888  1915 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:54.614650  1915 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: No bootstrap required, opened a new log
I20250114 20:52:54.615306  1915 raft_consensus.cc:357] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.615496  1915 raft_consensus.cc:383] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:54.615568  1915 raft_consensus.cc:738] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 41b3ad6c440944dbbedfdbd441f9a081, State: Initialized, Role: FOLLOWER
I20250114 20:52:54.615790  1915 consensus_queue.cc:260] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.616362  2077 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.616541  2077 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.616631  1915 sys_catalog.cc:564] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:54.620296   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
I20250114 20:52:54.620426   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:52:54.621120   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:54.621322  2088 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:54.621412  2088 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:52:54.624660  2089 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.625700   416 server_base.cc:1034] running on GCE node
W20250114 20:52:54.626223  2092 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:54.626358  2090 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.626703   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.626770   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.626825   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974626808 us; error 0 us; skew 500 ppm
I20250114 20:52:54.626986   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:54.628160   416 webserver.cc:458] Webserver started at http://127.0.104.1:37907/ using document root <none> and password file <none>
I20250114 20:52:54.628408   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.628491   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.628630   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.629244   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-0-root/instance:
uuid: "8b50c259f69b45d2bec7f54a74f83cb6"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.631448   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:54.632969  2097 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.633268   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:54.633394   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-0-root
uuid: "8b50c259f69b45d2bec7f54a74f83cb6"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.633507   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.650444   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.651072   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.652132   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:54.653092   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:54.653180   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.653281   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:54.653352   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.667328   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.1:45233
I20250114 20:52:54.667362  2159 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.1:45233 every 8 connection(s)
I20250114 20:52:54.670259   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:54.679376  2162 heartbeater.cc:346] Connected to a master server at 127.0.104.61:42093
I20250114 20:52:54.679605  2162 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.681102  2162 heartbeater.cc:510] Master 127.0.104.61:42093 requested a full tablet report, sending...
I20250114 20:52:54.681154  2160 heartbeater.cc:346] Connected to a master server at 127.0.104.60:36427
I20250114 20:52:54.681738  2160 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.682220  2160 heartbeater.cc:510] Master 127.0.104.60:36427 requested a full tablet report, sending...
I20250114 20:52:54.682503  1945 ts_manager.cc:194] Registered new tserver with Master: 8b50c259f69b45d2bec7f54a74f83cb6 (127.0.104.1:45233)
I20250114 20:52:54.683264  2010 ts_manager.cc:194] Registered new tserver with Master: 8b50c259f69b45d2bec7f54a74f83cb6 (127.0.104.1:45233)
W20250114 20:52:54.684160  2167 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.686394  2161 heartbeater.cc:346] Connected to a master server at 127.0.104.62:45789
I20250114 20:52:54.686582  2161 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.687100  2161 heartbeater.cc:510] Master 127.0.104.62:45789 requested a full tablet report, sending...
W20250114 20:52:54.688086  2168 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.688144  1880 ts_manager.cc:194] Registered new tserver with Master: 8b50c259f69b45d2bec7f54a74f83cb6 (127.0.104.1:45233)
W20250114 20:52:54.689143  2170 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.689862   416 server_base.cc:1034] running on GCE node
I20250114 20:52:54.690117   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.690181   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.690250   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974690219 us; error 0 us; skew 500 ppm
I20250114 20:52:54.690395   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:54.692065   416 webserver.cc:458] Webserver started at http://127.0.104.2:36111/ using document root <none> and password file <none>
I20250114 20:52:54.692303   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.692404   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.692529   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.693150   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-1-root/instance:
uuid: "421425bed8ce448cb1a2e94e14e35210"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.695406   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:54.696979  2175 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.697299   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.000s
I20250114 20:52:54.697407   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-1-root
uuid: "421425bed8ce448cb1a2e94e14e35210"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.697530   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.713151   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.713793   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.714370   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:54.715727   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:54.715801   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.715914   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:54.715966   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.731595   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.2:33407
I20250114 20:52:54.731631  2237 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.2:33407 every 8 connection(s)
I20250114 20:52:54.733846   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:54.745450  2238 heartbeater.cc:346] Connected to a master server at 127.0.104.60:36427
I20250114 20:52:54.745628  2238 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.746302  2238 heartbeater.cc:510] Master 127.0.104.60:36427 requested a full tablet report, sending...
W20250114 20:52:54.746963  2245 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.748584  2010 ts_manager.cc:194] Registered new tserver with Master: 421425bed8ce448cb1a2e94e14e35210 (127.0.104.2:33407)
I20250114 20:52:54.749512  2240 heartbeater.cc:346] Connected to a master server at 127.0.104.61:42093
I20250114 20:52:54.749701  2240 heartbeater.cc:463] Registering TS with master...
W20250114 20:52:54.750798  2246 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.752467  2240 heartbeater.cc:510] Master 127.0.104.61:42093 requested a full tablet report, sending...
I20250114 20:52:54.753645  1945 ts_manager.cc:194] Registered new tserver with Master: 421425bed8ce448cb1a2e94e14e35210 (127.0.104.2:33407)
I20250114 20:52:54.754423  2239 heartbeater.cc:346] Connected to a master server at 127.0.104.62:45789
I20250114 20:52:54.754541  2239 heartbeater.cc:463] Registering TS with master...
W20250114 20:52:54.754946  2249 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:54.755069  2239 heartbeater.cc:510] Master 127.0.104.62:45789 requested a full tablet report, sending...
I20250114 20:52:54.755925   416 server_base.cc:1034] running on GCE node
I20250114 20:52:54.756081  1880 ts_manager.cc:194] Registered new tserver with Master: 421425bed8ce448cb1a2e94e14e35210 (127.0.104.2:33407)
I20250114 20:52:54.756222   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:54.756341   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:54.756408   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887974756387 us; error 0 us; skew 500 ppm
I20250114 20:52:54.756605   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:54.757943   416 webserver.cc:458] Webserver started at http://127.0.104.3:36259/ using document root <none> and password file <none>
I20250114 20:52:54.758169   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:54.758260   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:54.758432   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:54.758411  2064 raft_consensus.cc:491] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:54.758559  2064 raft_consensus.cc:513] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.759337  2064 leader_election.cc:290] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 41b3ad6c440944dbbedfdbd441f9a081 (127.0.104.62:45789), b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093)
I20250114 20:52:54.759315   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-2-root/instance:
uuid: "9c9ac2d59922424bbb8599ea3086f494"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.759747  1890 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "41b3ad6c440944dbbedfdbd441f9a081" is_pre_election: true
I20250114 20:52:54.759776  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" is_pre_election: true
I20250114 20:52:54.759943  1890 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 576ddd9df26d4dbf8115bf6de5c5fea6 in term 0.
I20250114 20:52:54.759999  1955 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 576ddd9df26d4dbf8115bf6de5c5fea6 in term 0.
I20250114 20:52:54.760434  1995 leader_election.cc:304] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 576ddd9df26d4dbf8115bf6de5c5fea6, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 
I20250114 20:52:54.760696  2064 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:52:54.760804  2064 raft_consensus.cc:491] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:54.760871  2064 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:54.762392   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.002s	sys 0.000s
I20250114 20:52:54.762895  2064 raft_consensus.cc:513] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.763396  2064 leader_election.cc:290] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [CANDIDATE]: Term 1 election: Requested vote from peers 41b3ad6c440944dbbedfdbd441f9a081 (127.0.104.62:45789), b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093)
I20250114 20:52:54.763811  1890 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "41b3ad6c440944dbbedfdbd441f9a081"
I20250114 20:52:54.763964  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd"
I20250114 20:52:54.764006  1890 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:54.764112  1955 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:54.764685  2253 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.765093   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.001s
I20250114 20:52:54.765249   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-2-root
uuid: "9c9ac2d59922424bbb8599ea3086f494"
format_stamp: "Formatted at 2025-01-14 20:52:54 on dist-test-slave-npjh"
I20250114 20:52:54.765369   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.TskMasterLeadershipChange.1736887971271855-416-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:54.767571  1955 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 576ddd9df26d4dbf8115bf6de5c5fea6 in term 1.
I20250114 20:52:54.767594  1890 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 576ddd9df26d4dbf8115bf6de5c5fea6 in term 1.
I20250114 20:52:54.768147  1995 leader_election.cc:304] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 576ddd9df26d4dbf8115bf6de5c5fea6, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 
I20250114 20:52:54.768510  2064 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:52:54.768927  2064 raft_consensus.cc:695] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 1 LEADER]: Becoming Leader. State: Replica: 576ddd9df26d4dbf8115bf6de5c5fea6, State: Running, Role: LEADER
I20250114 20:52:54.769246  2064 consensus_queue.cc:237] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:54.770548  2258 sys_catalog.cc:455] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 576ddd9df26d4dbf8115bf6de5c5fea6. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.770821  2258 sys_catalog.cc:458] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:54.771373  2260 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:54.772893  2260 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:52:54.775789  1955 raft_consensus.cc:1270] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Refusing update from remote peer 576ddd9df26d4dbf8115bf6de5c5fea6: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:54.776373  1890 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Refusing update from remote peer 576ddd9df26d4dbf8115bf6de5c5fea6: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:54.776710  2064 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:54.776978  2258 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [LEADER]: Connected to new peer: Peer: permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:54.785226   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:54.785844   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:54.786479   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:54.787484   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:54.787578   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.787685   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:54.787766   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:54.791186  2051 sys_catalog.cc:455] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: SysCatalogTable state changed. Reason: New leader 576ddd9df26d4dbf8115bf6de5c5fea6. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.791519  2051 sys_catalog.cc:458] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.791875  2077 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 576ddd9df26d4dbf8115bf6de5c5fea6. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.792121  2077 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.794013  2064 sys_catalog.cc:455] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.794304  2064 sys_catalog.cc:458] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:54.794543  2064 sys_catalog.cc:455] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.794704  2064 sys_catalog.cc:458] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:54.797479  2260 catalog_manager.cc:1348] Generated new cluster ID: 4505bafd47054dbab844f0d67b5c3f37
I20250114 20:52:54.797600  2260 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:54.800762  2077 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.801023  2077 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.803370  2051 sys_catalog.cc:455] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:54.803648  2051 sys_catalog.cc:458] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:54.823688   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.3:44615
I20250114 20:52:54.823729  2327 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.3:44615 every 8 connection(s)
I20250114 20:52:54.838747  2329 heartbeater.cc:346] Connected to a master server at 127.0.104.62:45789
I20250114 20:52:54.838969  2329 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.840091  2329 heartbeater.cc:510] Master 127.0.104.62:45789 requested a full tablet report, sending...
I20250114 20:52:54.842671  1880 ts_manager.cc:194] Registered new tserver with Master: 9c9ac2d59922424bbb8599ea3086f494 (127.0.104.3:44615)
I20250114 20:52:54.846378  2328 heartbeater.cc:346] Connected to a master server at 127.0.104.60:36427
I20250114 20:52:54.846524  2328 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.847153  2328 heartbeater.cc:510] Master 127.0.104.60:36427 requested a full tablet report, sending...
I20250114 20:52:54.848023  2010 ts_manager.cc:194] Registered new tserver with Master: 9c9ac2d59922424bbb8599ea3086f494 (127.0.104.3:44615)
I20250114 20:52:54.848621  2260 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:52:54.849363  2260 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:54.853459  2330 heartbeater.cc:346] Connected to a master server at 127.0.104.61:42093
I20250114 20:52:54.853598  2330 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:54.854557  2330 heartbeater.cc:510] Master 127.0.104.61:42093 requested a full tablet report, sending...
I20250114 20:52:54.855607  1945 ts_manager.cc:194] Registered new tserver with Master: 9c9ac2d59922424bbb8599ea3086f494 (127.0.104.3:44615)
I20250114 20:52:54.856123   416 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.018833854s
I20250114 20:52:54.860410  2260 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: Generated new TSK 0
I20250114 20:52:54.860689  2260 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:54.961153   416 master.cc:537] Master@127.0.104.60:36427 shutting down...
I20250114 20:52:54.967945   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:52:54.968350   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:54.968556   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 576ddd9df26d4dbf8115bf6de5c5fea6: stopping tablet replica
I20250114 20:52:54.973425   416 master.cc:559] Master@127.0.104.60:36427 shutdown complete.
I20250114 20:52:55.581881  2063 catalog_manager.cc:1260] Loaded cluster ID: 4505bafd47054dbab844f0d67b5c3f37
I20250114 20:52:55.581990  2063 catalog_manager.cc:1553] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: loading cluster ID for follower catalog manager: success
I20250114 20:52:55.583871  2063 catalog_manager.cc:1575] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: acquiring CA information for follower catalog manager: success
I20250114 20:52:55.584903  2063 catalog_manager.cc:1603] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:52:55.622749  2088 catalog_manager.cc:1260] Loaded cluster ID: 4505bafd47054dbab844f0d67b5c3f37
I20250114 20:52:55.622855  2088 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: loading cluster ID for follower catalog manager: success
I20250114 20:52:55.624718  2088 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: acquiring CA information for follower catalog manager: success
I20250114 20:52:55.625722  2088 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
W20250114 20:52:55.685461  2160 heartbeater.cc:643] Failed to heartbeat to 127.0.104.60:36427 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:56.360738  2339 raft_consensus.cc:491] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 576ddd9df26d4dbf8115bf6de5c5fea6)
I20250114 20:52:56.360963  2339 raft_consensus.cc:513] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:56.361225  2340 raft_consensus.cc:491] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 576ddd9df26d4dbf8115bf6de5c5fea6)
I20250114 20:52:56.361413  2340 raft_consensus.cc:513] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:56.361740  2339 leader_election.cc:290] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:56.362102  2340 leader_election.cc:290] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 41b3ad6c440944dbbedfdbd441f9a081 (127.0.104.62:45789), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:56.362519  1890 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "41b3ad6c440944dbbedfdbd441f9a081" is_pre_election: true
I20250114 20:52:56.362758  1890 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b1e7d89ed71e45f1a2c335e8cbb921cd in term 1.
I20250114 20:52:56.363364  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "41b3ad6c440944dbbedfdbd441f9a081" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" is_pre_election: true
I20250114 20:52:56.363581  1955 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 41b3ad6c440944dbbedfdbd441f9a081 in term 1.
I20250114 20:52:56.363988  1866 leader_election.cc:304] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 41b3ad6c440944dbbedfdbd441f9a081, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 
W20250114 20:52:56.364169  1864 leader_election.cc:336] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:56.364318  1929 leader_election.cc:304] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 41b3ad6c440944dbbedfdbd441f9a081, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 
I20250114 20:52:56.364391  2339 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:52:56.364488  2339 raft_consensus.cc:491] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Starting leader election (detected failure of leader 576ddd9df26d4dbf8115bf6de5c5fea6)
I20250114 20:52:56.364591  2339 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 1 FOLLOWER]: Advancing to term 2
W20250114 20:52:56.364571  1929 leader_election.cc:336] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:56.364580  2340 raft_consensus.cc:2798] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:52:56.364761  2340 raft_consensus.cc:491] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Starting leader election (detected failure of leader 576ddd9df26d4dbf8115bf6de5c5fea6)
I20250114 20:52:56.364864  2340 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:52:56.366835  2339 raft_consensus.cc:513] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:56.366858  2340 raft_consensus.cc:513] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:56.367340  2339 leader_election.cc:290] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 election: Requested vote from peers b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:56.367409  2340 leader_election.cc:290] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 election: Requested vote from peers 41b3ad6c440944dbbedfdbd441f9a081 (127.0.104.62:45789), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:56.367797  1890 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "41b3ad6c440944dbbedfdbd441f9a081"
I20250114 20:52:56.368105  1890 raft_consensus.cc:2388] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate b1e7d89ed71e45f1a2c335e8cbb921cd in current term 2: Already voted for candidate 41b3ad6c440944dbbedfdbd441f9a081 in this term.
I20250114 20:52:56.368189  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "41b3ad6c440944dbbedfdbd441f9a081" candidate_term: 2 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd"
I20250114 20:52:56.368425  1955 raft_consensus.cc:2388] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 41b3ad6c440944dbbedfdbd441f9a081 in current term 2: Already voted for candidate b1e7d89ed71e45f1a2c335e8cbb921cd in this term.
W20250114 20:52:56.368790  1864 leader_election.cc:336] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:56.369113  1866 leader_election.cc:304] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 41b3ad6c440944dbbedfdbd441f9a081; no voters: 576ddd9df26d4dbf8115bf6de5c5fea6, b1e7d89ed71e45f1a2c335e8cbb921cd
W20250114 20:52:56.369110  1929 leader_election.cc:336] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:56.369264  1929 leader_election.cc:304] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 41b3ad6c440944dbbedfdbd441f9a081, 576ddd9df26d4dbf8115bf6de5c5fea6
I20250114 20:52:56.369398  2339 raft_consensus.cc:2743] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20250114 20:52:56.369604  2340 raft_consensus.cc:2743] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20250114 20:52:58.085413  2351 raft_consensus.cc:491] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:58.085605  2351 raft_consensus.cc:513] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:58.086266  2351 leader_election.cc:290] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:58.086813  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "41b3ad6c440944dbbedfdbd441f9a081" candidate_term: 3 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" is_pre_election: true
I20250114 20:52:58.087061  1955 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 41b3ad6c440944dbbedfdbd441f9a081 in term 2.
W20250114 20:52:58.087312  1864 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111) [suppressed 13 similar messages]
I20250114 20:52:58.087553  1866 leader_election.cc:304] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 41b3ad6c440944dbbedfdbd441f9a081, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 
I20250114 20:52:58.087817  2351 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250114 20:52:58.087961  2351 raft_consensus.cc:491] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:58.088068  2351 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 2 FOLLOWER]: Advancing to term 3
W20250114 20:52:58.088752  1864 leader_election.cc:336] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:58.090204  2351 raft_consensus.cc:513] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:58.090684  2351 leader_election.cc:290] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 election: Requested vote from peers b1e7d89ed71e45f1a2c335e8cbb921cd (127.0.104.61:42093), 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427)
I20250114 20:52:58.091037  1955 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "41b3ad6c440944dbbedfdbd441f9a081" candidate_term: 3 candidate_status { last_received { term: 1 index: 4 } } ignore_live_leader: false dest_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd"
I20250114 20:52:58.091250  1955 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 2 FOLLOWER]: Advancing to term 3
W20250114 20:52:58.091861  1864 leader_election.cc:336] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 576ddd9df26d4dbf8115bf6de5c5fea6 (127.0.104.60:36427): Network error: Client connection negotiation failed: client connection to 127.0.104.60:36427: connect: Connection refused (error 111)
I20250114 20:52:58.093328  1955 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 41b3ad6c440944dbbedfdbd441f9a081 in term 3.
I20250114 20:52:58.093703  1866 leader_election.cc:304] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 41b3ad6c440944dbbedfdbd441f9a081, b1e7d89ed71e45f1a2c335e8cbb921cd; no voters: 576ddd9df26d4dbf8115bf6de5c5fea6
I20250114 20:52:58.093956  2351 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 FOLLOWER]: Leader election won for term 3
I20250114 20:52:58.094297  2351 raft_consensus.cc:695] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 LEADER]: Becoming Leader. State: Replica: 41b3ad6c440944dbbedfdbd441f9a081, State: Running, Role: LEADER
I20250114 20:52:58.094558  2351 consensus_queue.cc:237] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } }
I20250114 20:52:58.095693  2354 sys_catalog.cc:455] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 41b3ad6c440944dbbedfdbd441f9a081. Latest consensus state: current_term: 3 leader_uuid: "41b3ad6c440944dbbedfdbd441f9a081" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "41b3ad6c440944dbbedfdbd441f9a081" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 45789 } } peers { permanent_uuid: "b1e7d89ed71e45f1a2c335e8cbb921cd" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 42093 } } peers { permanent_uuid: "576ddd9df26d4dbf8115bf6de5c5fea6" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 36427 } } }
I20250114 20:52:58.095933  2354 sys_catalog.cc:458] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:58.096434  2356 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:58.097950  2356 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:52:58.098717  2356 catalog_manager.cc:1260] Loaded cluster ID: 4505bafd47054dbab844f0d67b5c3f37
I20250114 20:52:58.098790  2356 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:58.099858  2356 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:58.100682  2356 catalog_manager.cc:5910] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: Loaded TSK: 0
I20250114 20:52:58.101045  2356 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:58.194700   416 tablet_server.cc:178] TabletServer@127.0.104.1:0 shutting down...
I20250114 20:52:58.202674   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:58.217573   416 tablet_server.cc:195] TabletServer@127.0.104.1:0 shutdown complete.
I20250114 20:52:58.220595   416 tablet_server.cc:178] TabletServer@127.0.104.2:0 shutting down...
I20250114 20:52:58.228600   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:58.241403   416 tablet_server.cc:195] TabletServer@127.0.104.2:0 shutdown complete.
I20250114 20:52:58.244050   416 tablet_server.cc:178] TabletServer@127.0.104.3:0 shutting down...
I20250114 20:52:58.251670   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:52:58.264640   416 tablet_server.cc:195] TabletServer@127.0.104.3:0 shutdown complete.
I20250114 20:52:58.267472   416 master.cc:537] Master@127.0.104.62:45789 shutting down...
I20250114 20:52:58.274559   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 LEADER]: Raft consensus shutting down.
I20250114 20:52:58.274894   416 pending_rounds.cc:62] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: Trying to abort 1 pending ops.
I20250114 20:52:58.274962   416 pending_rounds.cc:69] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: Aborting op as it isn't in flight: id { term: 3 index: 5 } timestamp: 7114293158277332992 op_type: NO_OP noop_request { }
I20250114 20:52:58.275121   416 raft_consensus.cc:2883] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:52:58.275246   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:58.275429   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P 41b3ad6c440944dbbedfdbd441f9a081: stopping tablet replica
I20250114 20:52:58.293772   416 master.cc:559] Master@127.0.104.62:45789 shutdown complete.
I20250114 20:52:58.298274   416 master.cc:537] Master@127.0.104.61:42093 shutting down...
I20250114 20:52:58.304654   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:52:58.304888   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:52:58.304978   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P b1e7d89ed71e45f1a2c335e8cbb921cd: stopping tablet replica
I20250114 20:52:58.318279   416 master.cc:559] Master@127.0.104.61:42093 shutdown complete.
[       OK ] TokenSignerITest.TskMasterLeadershipChange (3931 ms)
[ RUN      ] TokenSignerITest.AuthnTokenLifecycle
I20250114 20:52:58.329139   416 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.0.104.62:43479,127.0.104.61:37315,127.0.104.60:42803
I20250114 20:52:58.329778   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:58.333027  2357 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.333894  2358 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.334702  2360 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.335474   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.335737   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.335821   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.335875   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978335859 us; error 0 us; skew 500 ppm
I20250114 20:52:58.336051   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.342504   416 webserver.cc:458] Webserver started at http://127.0.104.62:35619/ using document root <none> and password file <none>
I20250114 20:52:58.342797   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.342880   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.343031   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.343696   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-0-root/instance:
uuid: "d2acdf98bf4348ceae84469cca505757"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.346194   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:52:58.347867  2365 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.348209   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.000s	sys 0.002s
I20250114 20:52:58.348369   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-0-root
uuid: "d2acdf98bf4348ceae84469cca505757"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.348490   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.368690   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.369293   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.381565   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.62:43479
I20250114 20:52:58.381584  2416 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.62:43479 every 8 connection(s)
I20250114 20:52:58.382889   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:58.382867  2417 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:58.386077  2417 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:58.386227  2419 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.386850  2420 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.387755  2422 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.388571   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.388883   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.388990   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.389055   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978389032 us; error 0 us; skew 500 ppm
I20250114 20:52:58.389274   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.390686   416 webserver.cc:458] Webserver started at http://127.0.104.61:40993/ using document root <none> and password file <none>
I20250114 20:52:58.390923   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.390997   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.391155   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.391928   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-1-root/instance:
uuid: "00edfb9f78da4d938f3a9e56db49b2b3"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.393709  2417 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:58.395038   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.003s	sys 0.001s
I20250114 20:52:58.397264  2430 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
W20250114 20:52:58.397286  2417 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.61:37315: Network error: Client connection negotiation failed: client connection to 127.0.104.61:37315: connect: Connection refused (error 111)
I20250114 20:52:58.397650   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:58.397774   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-1-root
uuid: "00edfb9f78da4d938f3a9e56db49b2b3"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.397898   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.409240   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.409849   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.421144   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.61:37315
I20250114 20:52:58.421175  2481 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.61:37315 every 8 connection(s)
I20250114 20:52:58.422544   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:58.422547  2482 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:58.426445  2482 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:58.426919  2487 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.427548  2485 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.427759  2484 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.428246   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.428562   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.428669   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.428740   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978428713 us; error 0 us; skew 500 ppm
I20250114 20:52:58.428967   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.430394   416 webserver.cc:458] Webserver started at http://127.0.104.60:34447/ using document root <none> and password file <none>
I20250114 20:52:58.430649   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.430737   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.430891   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.431555   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-2-root/instance:
uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.432566  2417 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } attempt: 1
I20250114 20:52:58.433917  2482 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:58.434602   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:52:58.437829  2495 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.438660   416 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:52:58.438820   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-2-root
uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.438964   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.439874  2417 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:58.441741  2417 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:42803: Network error: Client connection negotiation failed: client connection to 127.0.104.60:42803: connect: Connection refused (error 111)
I20250114 20:52:58.442878  2482 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:52:58.444669  2482 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:42803: Network error: Client connection negotiation failed: client connection to 127.0.104.60:42803: connect: Connection refused (error 111)
I20250114 20:52:58.463925   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.464638   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.475030  2417 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } attempt: 1
W20250114 20:52:58.477097  2417 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.0.104.60:42803: Network error: Client connection negotiation failed: client connection to 127.0.104.60:42803: connect: Connection refused (error 111)
I20250114 20:52:58.477450   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.60:42803
I20250114 20:52:58.477496  2547 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.60:42803 every 8 connection(s)
I20250114 20:52:58.478312   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:52:58.478798  2548 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:52:58.481088  2548 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:58.487649  2548 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:58.492758  2548 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:52:58.500671  2548 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: Bootstrap starting.
I20250114 20:52:58.502095  2548 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:58.503862  2548 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: No bootstrap required, opened a new log
I20250114 20:52:58.503867  2482 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } attempt: 1
I20250114 20:52:58.504817  2548 raft_consensus.cc:357] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.505019  2548 raft_consensus.cc:383] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:58.505120  2548 raft_consensus.cc:738] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5edb2b39127c4d9d8c8cf1ee4c1a230b, State: Initialized, Role: FOLLOWER
I20250114 20:52:58.505396  2548 consensus_queue.cc:260] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.506124  2553 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.506870  2548 sys_catalog.cc:564] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:58.506871  2553 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: This master's current role is: FOLLOWER
W20250114 20:52:58.512288  2564 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:58.512404  2564 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:58.512415  2482 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: Bootstrap starting.
I20250114 20:52:58.514276  2482 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:58.515326  2417 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } attempt: 2
I20250114 20:52:58.516049  2482 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: No bootstrap required, opened a new log
I20250114 20:52:58.516950  2482 raft_consensus.cc:357] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.517206  2482 raft_consensus.cc:383] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:58.517294  2482 raft_consensus.cc:738] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 00edfb9f78da4d938f3a9e56db49b2b3, State: Initialized, Role: FOLLOWER
I20250114 20:52:58.517545  2482 consensus_queue.cc:260] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.518157  2566 sys_catalog.cc:455] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.518400  2566 sys_catalog.cc:458] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.518512  2482 sys_catalog.cc:564] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:58.524188  2417 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: Bootstrap starting.
W20250114 20:52:58.524667  2577 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:58.524770  2577 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:52:58.525681  2417 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: Neither blocks nor log segments found. Creating new log.
I20250114 20:52:58.527231  2417 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: No bootstrap required, opened a new log
I20250114 20:52:58.527834  2417 raft_consensus.cc:357] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.528004  2417 raft_consensus.cc:383] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:52:58.528066  2417 raft_consensus.cc:738] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2acdf98bf4348ceae84469cca505757, State: Initialized, Role: FOLLOWER
I20250114 20:52:58.528338  2417 consensus_queue.cc:260] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.528895  2579 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.529085  2417 sys_catalog.cc:564] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:52:58.529140  2579 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.533226   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
I20250114 20:52:58.533313   416 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:52:58.534049   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:52:58.534237  2590 catalog_manager.cc:1559] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:52:58.534332  2590 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:52:58.537381  2591 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.537937  2592 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.538842  2594 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.539495   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.539743   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.539808   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.539880   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978539846 us; error 0 us; skew 500 ppm
I20250114 20:52:58.540024   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.541344   416 webserver.cc:458] Webserver started at http://127.0.104.1:34083/ using document root <none> and password file <none>
I20250114 20:52:58.541541   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.541620   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.541728   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.542336   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-0-root/instance:
uuid: "14f9cad48e024973bbeaccf23eb07806"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.544657   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:52:58.546211  2599 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.546581   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.002s	sys 0.000s
I20250114 20:52:58.546694   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-0-root
uuid: "14f9cad48e024973bbeaccf23eb07806"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.546824   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.572214   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.572849   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.573439   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:58.574314   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:58.574397   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.574505   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:58.574556   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.588626   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.1:38327
I20250114 20:52:58.588655  2661 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.1:38327 every 8 connection(s)
I20250114 20:52:58.591876   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:58.599879  2662 heartbeater.cc:346] Connected to a master server at 127.0.104.60:42803
I20250114 20:52:58.600154  2662 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.600163  2663 heartbeater.cc:346] Connected to a master server at 127.0.104.62:43479
I20250114 20:52:58.600425  2663 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.600899  2662 heartbeater.cc:510] Master 127.0.104.60:42803 requested a full tablet report, sending...
I20250114 20:52:58.601004  2663 heartbeater.cc:510] Master 127.0.104.62:43479 requested a full tablet report, sending...
I20250114 20:52:58.602296  2382 ts_manager.cc:194] Registered new tserver with Master: 14f9cad48e024973bbeaccf23eb07806 (127.0.104.1:38327)
W20250114 20:52:58.602669  2669 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.602267  2513 ts_manager.cc:194] Registered new tserver with Master: 14f9cad48e024973bbeaccf23eb07806 (127.0.104.1:38327)
I20250114 20:52:58.605468  2664 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37315
W20250114 20:52:58.605576  2670 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.605619  2664 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.606022  2664 heartbeater.cc:510] Master 127.0.104.61:37315 requested a full tablet report, sending...
I20250114 20:52:58.607132   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.607234  2447 ts_manager.cc:194] Registered new tserver with Master: 14f9cad48e024973bbeaccf23eb07806 (127.0.104.1:38327)
W20250114 20:52:58.608023  2672 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.608446   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.608516   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.608573   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978608555 us; error 0 us; skew 500 ppm
I20250114 20:52:58.608736   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.609862  2566 raft_consensus.cc:491] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:52:58.610040  2566 raft_consensus.cc:513] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.610174   416 webserver.cc:458] Webserver started at http://127.0.104.2:44611/ using document root <none> and password file <none>
I20250114 20:52:58.610473   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.610574   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.610765   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.610806  2566 leader_election.cc:290] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d2acdf98bf4348ceae84469cca505757 (127.0.104.62:43479), 5edb2b39127c4d9d8c8cf1ee4c1a230b (127.0.104.60:42803)
I20250114 20:52:58.611167  2523 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" is_pre_election: true
I20250114 20:52:58.611194  2392 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2acdf98bf4348ceae84469cca505757" is_pre_election: true
I20250114 20:52:58.611353  2523 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 00edfb9f78da4d938f3a9e56db49b2b3 in term 0.
I20250114 20:52:58.611414  2392 raft_consensus.cc:2463] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 00edfb9f78da4d938f3a9e56db49b2b3 in term 0.
I20250114 20:52:58.611598   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-1-root/instance:
uuid: "4a1a66d6c7b84a249f39b7440eb0e006"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.611824  2432 leader_election.cc:304] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 3 yes votes; 0 no votes. yes voters: 00edfb9f78da4d938f3a9e56db49b2b3, 5edb2b39127c4d9d8c8cf1ee4c1a230b, d2acdf98bf4348ceae84469cca505757; no voters: 
I20250114 20:52:58.612115  2566 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:52:58.612236  2566 raft_consensus.cc:491] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:52:58.612361  2566 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:58.615025  2566 raft_consensus.cc:513] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.615347   416 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.000s	sys 0.004s
I20250114 20:52:58.615900  2566 leader_election.cc:290] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [CANDIDATE]: Term 1 election: Requested vote from peers d2acdf98bf4348ceae84469cca505757 (127.0.104.62:43479), 5edb2b39127c4d9d8c8cf1ee4c1a230b (127.0.104.60:42803)
I20250114 20:52:58.616123  2392 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2acdf98bf4348ceae84469cca505757"
I20250114 20:52:58.616137  2523 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b"
I20250114 20:52:58.616359  2392 raft_consensus.cc:3054] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:58.616436  2523 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:52:58.617720  2677 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.618176   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:58.618321   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-1-root
uuid: "4a1a66d6c7b84a249f39b7440eb0e006"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.618450   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.618953  2392 raft_consensus.cc:2463] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 00edfb9f78da4d938f3a9e56db49b2b3 in term 1.
I20250114 20:52:58.619062  2523 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 00edfb9f78da4d938f3a9e56db49b2b3 in term 1.
I20250114 20:52:58.619381  2431 leader_election.cc:304] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 00edfb9f78da4d938f3a9e56db49b2b3, d2acdf98bf4348ceae84469cca505757; no voters: 
I20250114 20:52:58.619649  2566 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:52:58.620164  2566 raft_consensus.cc:695] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [term 1 LEADER]: Becoming Leader. State: Replica: 00edfb9f78da4d938f3a9e56db49b2b3, State: Running, Role: LEADER
I20250114 20:52:58.620518  2566 consensus_queue.cc:237] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } }
I20250114 20:52:58.622016  2679 sys_catalog.cc:455] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 00edfb9f78da4d938f3a9e56db49b2b3. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.622278  2679 sys_catalog.cc:458] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:58.624748  2684 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:52:58.627033  2684 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:52:58.630400  2392 raft_consensus.cc:1270] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 1 FOLLOWER]: Refusing update from remote peer 00edfb9f78da4d938f3a9e56db49b2b3: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:58.630424  2523 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [term 1 FOLLOWER]: Refusing update from remote peer 00edfb9f78da4d938f3a9e56db49b2b3: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:52:58.631009  2566 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:58.631289  2679 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:52:58.631493   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.632249   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.633724   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:58.634897  2579 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 00edfb9f78da4d938f3a9e56db49b2b3. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.635180  2579 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.635177  2553 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: SysCatalogTable state changed. Reason: New leader 00edfb9f78da4d938f3a9e56db49b2b3. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.635569  2553 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.636595   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:58.636699   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.636797   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:58.636876   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.638013  2679 sys_catalog.cc:455] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.638270  2679 sys_catalog.cc:458] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:58.638454  2679 sys_catalog.cc:455] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.638653  2679 sys_catalog.cc:458] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 [sys.catalog]: This master's current role is: LEADER
I20250114 20:52:58.639981  2579 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.640187  2579 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.640960  2553 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2acdf98bf4348ceae84469cca505757" member_type: VOTER last_known_addr { host: "127.0.104.62" port: 43479 } } peers { permanent_uuid: "00edfb9f78da4d938f3a9e56db49b2b3" member_type: VOTER last_known_addr { host: "127.0.104.61" port: 37315 } } peers { permanent_uuid: "5edb2b39127c4d9d8c8cf1ee4c1a230b" member_type: VOTER last_known_addr { host: "127.0.104.60" port: 42803 } } }
I20250114 20:52:58.641183  2553 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:52:58.641481  2688 mvcc.cc:204] Tried to move back new op lower bound from 7114293160464670720 to 7114293160432902144. Current Snapshot: MvccSnapshot[applied={T|T < 7114293160464670720 or (T in {7114293160464670720})}]
I20250114 20:52:58.641705  2684 catalog_manager.cc:1348] Generated new cluster ID: aa29be4137b143bc91da45764678cff6
I20250114 20:52:58.641795  2684 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:52:58.653867  2684 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:52:58.655009  2684 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:52:58.666249   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.2:35177
I20250114 20:52:58.666270  2752 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.2:35177 every 8 connection(s)
I20250114 20:52:58.668547  2684 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: Generated new TSK 0
I20250114 20:52:58.668840  2684 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:52:58.685091  2754 heartbeater.cc:346] Connected to a master server at 127.0.104.62:43479
I20250114 20:52:58.685827  2754 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.686175   416 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:52:58.686297  2756 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37315
I20250114 20:52:58.686357  2753 heartbeater.cc:346] Connected to a master server at 127.0.104.60:42803
I20250114 20:52:58.686457  2756 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.686488  2754 heartbeater.cc:510] Master 127.0.104.62:43479 requested a full tablet report, sending...
I20250114 20:52:58.686492  2753 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.687059  2756 heartbeater.cc:510] Master 127.0.104.61:37315 requested a full tablet report, sending...
I20250114 20:52:58.687115  2753 heartbeater.cc:510] Master 127.0.104.60:42803 requested a full tablet report, sending...
I20250114 20:52:58.687639  2382 ts_manager.cc:194] Registered new tserver with Master: 4a1a66d6c7b84a249f39b7440eb0e006 (127.0.104.2:35177)
I20250114 20:52:58.687928  2447 ts_manager.cc:194] Registered new tserver with Master: 4a1a66d6c7b84a249f39b7440eb0e006 (127.0.104.2:35177)
I20250114 20:52:58.689185  2513 ts_manager.cc:194] Registered new tserver with Master: 4a1a66d6c7b84a249f39b7440eb0e006 (127.0.104.2:35177)
I20250114 20:52:58.689282  2447 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51800
W20250114 20:52:58.692445  2760 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.692968  2761 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:52:58.693955  2763 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:52:58.694902   416 server_base.cc:1034] running on GCE node
I20250114 20:52:58.695183   416 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:52:58.695254   416 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:52:58.695328   416 hybrid_clock.cc:648] HybridClock initialized: now 1736887978695293 us; error 0 us; skew 500 ppm
I20250114 20:52:58.695504   416 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:52:58.696889   416 webserver.cc:458] Webserver started at http://127.0.104.3:46143/ using document root <none> and password file <none>
I20250114 20:52:58.697104   416 fs_manager.cc:362] Metadata directory not provided
I20250114 20:52:58.697209   416 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:52:58.697336   416 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:52:58.697983   416 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-2-root/instance:
uuid: "883041cbe2624177bf8c0969c4c76271"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.700529   416 fs_manager.cc:696] Time spent creating directory manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:52:58.702101  2768 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.702487   416 fs_manager.cc:730] Time spent opening block manager: real 0.001s	user 0.001s	sys 0.000s
I20250114 20:52:58.702585   416 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-2-root
uuid: "883041cbe2624177bf8c0969c4c76271"
format_stamp: "Formatted at 2025-01-14 20:52:58 on dist-test-slave-npjh"
I20250114 20:52:58.702698   416 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskS5lTji/test-tmp/token_signer-itest.0.TokenSignerITest.AuthnTokenLifecycle.1736887971271855-416-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:52:58.714618   416 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:52:58.715222   416 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:52:58.716408   416 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:52:58.717314   416 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:52:58.717389   416 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.717474   416 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:52:58.717517   416 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:52:58.733835   416 rpc_server.cc:307] RPC server started. Bound to: 127.0.104.3:39077
I20250114 20:52:58.733870  2830 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.104.3:39077 every 8 connection(s)
I20250114 20:52:58.742509  2831 heartbeater.cc:346] Connected to a master server at 127.0.104.60:42803
I20250114 20:52:58.742686  2831 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.743289  2831 heartbeater.cc:510] Master 127.0.104.60:42803 requested a full tablet report, sending...
I20250114 20:52:58.744711  2513 ts_manager.cc:194] Registered new tserver with Master: 883041cbe2624177bf8c0969c4c76271 (127.0.104.3:39077)
I20250114 20:52:58.745421  2833 heartbeater.cc:346] Connected to a master server at 127.0.104.61:37315
I20250114 20:52:58.745554  2833 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.746016  2833 heartbeater.cc:510] Master 127.0.104.61:37315 requested a full tablet report, sending...
I20250114 20:52:58.747011  2447 ts_manager.cc:194] Registered new tserver with Master: 883041cbe2624177bf8c0969c4c76271 (127.0.104.3:39077)
I20250114 20:52:58.748066  2447 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51802
I20250114 20:52:58.750810  2832 heartbeater.cc:346] Connected to a master server at 127.0.104.62:43479
I20250114 20:52:58.750928  2832 heartbeater.cc:463] Registering TS with master...
I20250114 20:52:58.751189  2832 heartbeater.cc:510] Master 127.0.104.62:43479 requested a full tablet report, sending...
I20250114 20:52:58.752018  2382 ts_manager.cc:194] Registered new tserver with Master: 883041cbe2624177bf8c0969c4c76271 (127.0.104.3:39077)
I20250114 20:52:58.752625   416 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.017299618s
I20250114 20:52:59.513880  2564 catalog_manager.cc:1260] Loaded cluster ID: aa29be4137b143bc91da45764678cff6
I20250114 20:52:59.514026  2564 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: loading cluster ID for follower catalog manager: success
I20250114 20:52:59.515974  2564 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: acquiring CA information for follower catalog manager: success
I20250114 20:52:59.517078  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:52:59.535689  2590 catalog_manager.cc:1260] Loaded cluster ID: aa29be4137b143bc91da45764678cff6
I20250114 20:52:59.535802  2590 catalog_manager.cc:1553] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: loading cluster ID for follower catalog manager: success
I20250114 20:52:59.537664  2590 catalog_manager.cc:1575] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: acquiring CA information for follower catalog manager: success
I20250114 20:52:59.538605  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:52:59.609542  2447 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51794
I20250114 20:52:59.690423  2756 heartbeater.cc:502] Master 127.0.104.61:37315 was elected leader, sending a full tablet report...
I20250114 20:52:59.749897  2833 heartbeater.cc:502] Master 127.0.104.61:37315 was elected leader, sending a full tablet report...
I20250114 20:53:00.610952  2664 heartbeater.cc:502] Master 127.0.104.61:37315 was elected leader, sending a full tablet report...
I20250114 20:53:09.522830  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:53:09.543884  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:53:18.544965  2577 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: Generated new TSK 1
I20250114 20:53:19.528628  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:19.549392  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:29.534488  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:29.554824  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:39.540377  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:39.560616  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 1
I20250114 20:53:39.568068  2577 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3: Generated new TSK 2
I20250114 20:53:49.545874  2564 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 5edb2b39127c4d9d8c8cf1ee4c1a230b: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 2
I20250114 20:53:49.566349  2590 catalog_manager.cc:1603] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 2
I20250114 20:53:58.310170   416 tablet_server.cc:178] TabletServer@127.0.104.1:0 shutting down...
I20250114 20:53:58.320868   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:53:58.333868   416 tablet_server.cc:195] TabletServer@127.0.104.1:0 shutdown complete.
I20250114 20:53:58.336712   416 tablet_server.cc:178] TabletServer@127.0.104.2:0 shutting down...
I20250114 20:53:58.344915   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:53:58.359040   416 tablet_server.cc:195] TabletServer@127.0.104.2:0 shutdown complete.
I20250114 20:53:58.362113   416 tablet_server.cc:178] TabletServer@127.0.104.3:0 shutting down...
I20250114 20:53:58.371004   416 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:53:58.384207   416 tablet_server.cc:195] TabletServer@127.0.104.3:0 shutdown complete.
I20250114 20:53:58.387336   416 master.cc:537] Master@127.0.104.62:43479 shutting down...
W20250114 20:53:58.390317  2431 proxy.cc:239] Call had error, refreshing address and retrying: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on Master [suppressed 5 similar messages]
W20250114 20:53:58.391726  2431 consensus_peers.cc:487] T 00000000000000000000000000000000 P 00edfb9f78da4d938f3a9e56db49b2b3 -> Peer d2acdf98bf4348ceae84469cca505757 (127.0.104.62:43479): Couldn't send request to peer d2acdf98bf4348ceae84469cca505757. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on Master. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:53:58.396757   416 raft_consensus.cc:2238] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:53:58.397042   416 raft_consensus.cc:2267] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:53:58.397200   416 tablet_replica.cc:331] T 00000000000000000000000000000000 P d2acdf98bf4348ceae84469cca505757: stopping tablet replica
F20250114 20:53:58.403462  2413 diagnostic_socket.cc:234] Check failed: fd_ >= 0 (-1 vs. 0) 
*** Check failure stack trace: ***
*** Aborted at 1736888038 (unix time) try "date -d @1736888038" if you are using GNU date ***
PC: @                0x0 (unknown)
*** SIGABRT (@0x3e8000001a0) received by PID 416 (TID 0x7fa835adc700) from PID 416; stack trace: ***
    @     0x7fa85c656980 (unknown) at ??:0
    @     0x7fa85566dfb7 gsignal at ??:0
    @     0x7fa85566f921 abort at ??:0
    @     0x7fa858396dcd google::LogMessage::Fail() at ??:0
    @     0x7fa85839ab93 google::LogMessage::SendToLog() at ??:0
    @     0x7fa8583967cc google::LogMessage::Flush() at ??:0
    @     0x7fa858397f59 google::LogMessageFatal::~LogMessageFatal() at ??:0
    @     0x7fa859a81d4f kudu::DiagnosticSocket::ReceiveResponse() at ??:0
    @     0x7fa859a822ae kudu::DiagnosticSocket::Query() at ??:0
    @     0x7fa860115cea kudu::rpc::AcceptorPool::GetPendingConnectionsNum() at ??:0
    @     0x7fa8601847f4 kudu::rpc::Messenger::GetPendingConnectionsNum() at ??:0
    @     0x7fa86018a24c kudu::rpc::Messenger::AddAcceptorPool()::$_1::operator()() at ??:0
    @     0x7fa86018a0c9 std::_Function_handler<>::_M_invoke() at ??:0
    @     0x7fa86d203cad std::function<>::operator()() at ??:0
    @     0x7fa86d203888 kudu::FunctionGauge<>::value() at ??:0
    @     0x7fa86d202ee9 kudu::FunctionGauge<>::WriteValue() at ??:0
    @     0x7fa85990f32f kudu::Gauge::WriteAsJson() at ??:0
    @     0x7fa85991ac4b kudu::WriteMetricsToJson<>() at ??:0
    @     0x7fa859908680 kudu::MetricEntity::WriteAsJson() at ??:0
    @     0x7fa85990b81e kudu::MetricRegistry::WriteAsJson() at ??:0
    @     0x7fa862428127 kudu::server::DiagnosticsLog::LogMetrics() at ??:0
    @     0x7fa862427784 kudu::server::DiagnosticsLog::RunThread() at ??:0
    @     0x7fa86242978c kudu::server::DiagnosticsLog::Start()::$_0::operator()() at ??:0
    @     0x7fa862429609 std::_Function_handler<>::_M_invoke() at ??:0
    @     0x7fa86e58936d std::function<>::operator()() at ??:0
    @     0x7fa859a0e9be kudu::Thread::SuperviseThread() at ??:0
    @     0x7fa85c64b6db start_thread at ??:0
    @     0x7fa85575071f clone at ??:0