Diagnosed failure

The trace below shows the write op's APPLY phase dominating the request: BulkCheckPresence took ~5.39 s and the ApplyRowOperation cycle another ~5.38 s, pushing the Write RPC to 10879 ms against a client timeout of 9999 ms. The client therefore timed out the flush, producing the IO error below.
ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings/3: /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/predicate-test.cc:1563: Failure
Failed
Bad status: IO error: failed to flush data: error details are available via KuduSession::GetPendingErrors()
I20260321 14:57:41.104857 28339 tablet.cc:1621] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: MemRowSet was empty: no flush needed.
W20260321 14:57:41.116475 29241 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=17, attempt_no=0}) took 10879 ms (client timeout 9999 ms). Trace:
W20260321 14:57:41.117273 29241 rpcz_store.cc:269] 0321 14:57:30.237410 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:57:30.237992 (+   582us) service_pool.cc:225] Handling call
0321 14:57:41.116108 (+10878116us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:57:30.241577 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:57:30.241933 (+   356us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:57:30.241961 (+    28us) write_op.cc:435] Acquired schema lock
0321 14:57:30.241969 (+     8us) tablet.cc:661] Decoding operations
0321 14:57:30.275079 (+ 33110us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:30.275196 (+   117us) write_op.cc:641] Partition lock acquired for write op
0321 14:57:30.275221 (+    25us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:57:30.310038 (+ 34817us) tablet.cc:700] Row locks acquired
0321 14:57:30.310065 (+    27us) write_op.cc:260] PREPARE: finished
0321 14:57:30.310319 (+   254us) write_op.cc:270] Start()
0321 14:57:30.310463 (+   144us) write_op.cc:276] Timestamp: P: 1774105050310253 usec, L: 0
0321 14:57:30.310487 (+    24us) op_driver.cc:348] REPLICATION: starting
0321 14:57:30.311618 (+  1131us) log.cc:844] Serialized 32439 byte log entry
0321 14:57:30.319801 (+  8183us) op_driver.cc:464] REPLICATION: finished
0321 14:57:30.321107 (+  1306us) write_op.cc:301] APPLY: starting
0321 14:57:30.321189 (+    82us) tablet.cc:1366] starting BulkCheckPresence
0321 14:57:35.712590 (+5391401us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:35.712676 (+    86us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:41.094063 (+5381387us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:41.097088 (+  3025us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:41.097157 (+    69us) write_op.cc:312] APPLY: finished
0321 14:57:41.103130 (+  5973us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:41.104585 (+  1455us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:41.114481 (+  9896us) write_op.cc:454] Released schema lock
0321 14:57:41.115317 (+   836us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":1026,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5392982,"num_ops":1008,"prepare.queue_time_us":1069,"prepare.run_cpu_time_us":70483,"prepare.run_wall_time_us":70508,"replication_time_us":9122,"spinlock_wait_cycles":22784,"thread_start_us":1928,"threads_started":3,"wal-append.queue_time_us":1196}]]}
I20260321 14:57:41.443348 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:57:41.513790 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:57:41.514707 28339 tablet_replica.cc:333] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: stopping tablet replica
I20260321 14:57:41.520393 28339 raft_consensus.cc:2243] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:57:41.522794 28339 raft_consensus.cc:2272] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:57:41.618646 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:57:41.655553 28339 master.cc:562] Master@127.27.172.254:35293 shutting down...
I20260321 14:57:41.709046 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:57:41.709733 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:57:41.710029 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: stopping tablet replica
I20260321 14:57:41.799431 28339 master.cc:584] Master@127.27.172.254:35293 shutdown complete.
I20260321 14:57:41.879897 28339 test_util.cc:182] -----------------------------------------------
I20260321 14:57:41.880146 28339 test_util.cc:183] Had failures, leaving test files at /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0
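One way to quantify where the time went is to sum the per-step `(+ Nus)` deltas from the RPC trace quoted above. The sketch below is illustrative, not part of the test: it parses only the five most significant steps of the 'op' trace (the full trace has more small steps, which is why the server reported 10879 ms total), using a hypothetical inline copy of those lines.

```python
import re

# A subset of the 'op' trace from the diagnosed failure above.
# Each "(+ Nus)" field is the microseconds elapsed since the previous step.
trace = """
0321 14:57:30.241577 (+     0us) write_op.cc:183] PREPARE: starting
0321 14:57:30.275079 (+ 33110us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:30.310038 (+ 34817us) tablet.cc:700] Row locks acquired
0321 14:57:35.712590 (+5391401us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:41.094063 (+5381387us) tablet.cc:1382] finished ApplyRowOperation cycle
"""

# Extract every delta and total them in milliseconds.
deltas_us = [int(m) for m in re.findall(r"\(\+\s*(\d+)us\)", trace)]
total_ms = sum(deltas_us) / 1000
print(f"total over these steps: {total_ms:.0f} ms")  # prints "total over these steps: 10841 ms"
```

Even this partial sum already exceeds the client's 9999 ms timeout, with the two APPLY-phase steps (BulkCheckPresence and the ApplyRowOperation cycle) accounting for nearly all of it.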

Full log

Note: This is test shard 2 of 4.
[==========] Running 5 tests from 3 test suites.
[----------] Global test environment set-up.
[----------] 3 tests from PredicateTest
[ RUN      ] PredicateTest.TestInt8Predicates
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20260321 14:56:08.484696 28339 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.27.172.254:37599
I20260321 14:56:08.486441 28339 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260321 14:56:08.487612 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:08.506805 28345 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:08.506920 28348 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:08.508056 28339 server_base.cc:1061] running on GCE node
W20260321 14:56:08.506963 28346 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:09.794178 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:09.794446 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:09.794601 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104969794586 us; error 0 us; skew 500 ppm
I20260321 14:56:09.795459 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:09.805796 28339 webserver.cc:492] Webserver started at http://127.27.172.254:37513/ using document root <none> and password file <none>
I20260321 14:56:09.806787 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:09.807001 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:09.807446 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:09.813820 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/master-0-root/instance:
uuid: "458706fd8c19421eb4bc8ffe9549af2a"
format_stamp: "Formatted at 2026-03-21 14:56:09 on dist-test-slave-blhp"
I20260321 14:56:09.822414 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.008s	user 0.002s	sys 0.008s
I20260321 14:56:09.828923 28354 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:09.830500 28339 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.002s
I20260321 14:56:09.830940 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/master-0-root
uuid: "458706fd8c19421eb4bc8ffe9549af2a"
format_stamp: "Formatted at 2026-03-21 14:56:09 on dist-test-slave-blhp"
I20260321 14:56:09.831293 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:09.909353 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:09.911368 28339 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260321 14:56:09.912024 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:10.010118 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.254:37599
I20260321 14:56:10.010229 28405 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.254:37599 every 8 connection(s)
I20260321 14:56:10.019199 28406 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:10.048213 28406 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: Bootstrap starting.
I20260321 14:56:10.056238 28406 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:10.058612 28406 log.cc:826] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: Log is configured to *not* fsync() on all Append() calls
I20260321 14:56:10.064177 28406 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: No bootstrap required, opened a new log
I20260321 14:56:10.090637 28406 raft_consensus.cc:359] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER }
I20260321 14:56:10.091758 28406 raft_consensus.cc:385] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:10.092038 28406 raft_consensus.cc:740] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 458706fd8c19421eb4bc8ffe9549af2a, State: Initialized, Role: FOLLOWER
I20260321 14:56:10.092972 28406 consensus_queue.cc:260] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER }
I20260321 14:56:10.093618 28406 raft_consensus.cc:399] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:10.093881 28406 raft_consensus.cc:493] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:10.094209 28406 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:10.098985 28406 raft_consensus.cc:515] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER }
I20260321 14:56:10.099833 28406 leader_election.cc:304] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 458706fd8c19421eb4bc8ffe9549af2a; no voters: 
I20260321 14:56:10.101747 28406 leader_election.cc:290] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:10.102188 28409 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:10.105355 28409 raft_consensus.cc:697] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 1 LEADER]: Becoming Leader. State: Replica: 458706fd8c19421eb4bc8ffe9549af2a, State: Running, Role: LEADER
I20260321 14:56:10.106554 28409 consensus_queue.cc:237] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER }
I20260321 14:56:10.108160 28406 sys_catalog.cc:565] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [sys.catalog]: configured and running, proceeding with master startup.
I20260321 14:56:10.127866 28410 sys_catalog.cc:455] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "458706fd8c19421eb4bc8ffe9549af2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER } }
I20260321 14:56:10.128743 28410 sys_catalog.cc:458] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:10.129336 28411 sys_catalog.cc:455] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 458706fd8c19421eb4bc8ffe9549af2a. Latest consensus state: current_term: 1 leader_uuid: "458706fd8c19421eb4bc8ffe9549af2a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "458706fd8c19421eb4bc8ffe9549af2a" member_type: VOTER } }
I20260321 14:56:10.130115 28411 sys_catalog.cc:458] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:10.134181 28416 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260321 14:56:10.149878 28416 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260321 14:56:10.153151 28339 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20260321 14:56:10.173131 28416 catalog_manager.cc:1357] Generated new cluster ID: 53e7712703914cd6ae404f19cba1dd92
I20260321 14:56:10.173499 28416 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260321 14:56:10.190639 28416 catalog_manager.cc:1380] Generated new certificate authority record
I20260321 14:56:10.194144 28416 catalog_manager.cc:1514] Loading token signing keys...
I20260321 14:56:10.208434 28416 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: Generated new TSK 0
I20260321 14:56:10.210121 28416 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260321 14:56:10.221524 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:10.231470 28428 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:10.235669 28429 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:10.240697 28431 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:10.241508 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:10.243379 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:10.243687 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:10.243932 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104970243906 us; error 0 us; skew 500 ppm
I20260321 14:56:10.244616 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:10.253839 28339 webserver.cc:492] Webserver started at http://127.27.172.193:40751/ using document root <none> and password file <none>
I20260321 14:56:10.254807 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:10.255089 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:10.256122 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:10.257707 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/ts-0-root/instance:
uuid: "1b2bd3d0959f4a06880e93fefa2c6261"
format_stamp: "Formatted at 2026-03-21 14:56:10 on dist-test-slave-blhp"
I20260321 14:56:10.264544 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.001s
I20260321 14:56:10.272251 28436 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:10.273324 28339 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.001s
I20260321 14:56:10.273625 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/ts-0-root
uuid: "1b2bd3d0959f4a06880e93fefa2c6261"
format_stamp: "Formatted at 2026-03-21 14:56:10 on dist-test-slave-blhp"
I20260321 14:56:10.273910 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestInt8Predicates.1774104968283944-28339-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:10.297654 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:10.299196 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:10.302174 28339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260321 14:56:10.308339 28339 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260321 14:56:10.308673 28339 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:10.309122 28339 ts_tablet_manager.cc:616] Registered 0 tablets
I20260321 14:56:10.309346 28339 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:10.438162 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.193:33161
I20260321 14:56:10.438373 28498 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.193:33161 every 8 connection(s)
I20260321 14:56:10.476894 28499 heartbeater.cc:344] Connected to a master server at 127.27.172.254:37599
I20260321 14:56:10.477412 28499 heartbeater.cc:461] Registering TS with master...
I20260321 14:56:10.478489 28499 heartbeater.cc:507] Master 127.27.172.254:37599 requested a full tablet report, sending...
I20260321 14:56:10.482518 28371 ts_manager.cc:194] Registered new tserver with Master: 1b2bd3d0959f4a06880e93fefa2c6261 (127.27.172.193:33161)
I20260321 14:56:10.482800 28339 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.038632528s
I20260321 14:56:10.485570 28371 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:56384
I20260321 14:56:10.526876 28371 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:56396:
name: "table"
schema {
  columns {
    name: "key"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "value"
    type: INT8
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20260321 14:56:10.617362 28464 tablet_service.cc:1511] Processing CreateTablet for tablet 3bff16e9759b48d086d15727a0f64901 (DEFAULT_TABLE table=table [id=9c8ceab6e8aa488c96e79e550208781c]), partition=RANGE (key) PARTITION UNBOUNDED
I20260321 14:56:10.618901 28464 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3bff16e9759b48d086d15727a0f64901. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:10.634116 28511 tablet_bootstrap.cc:492] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: Bootstrap starting.
I20260321 14:56:10.641086 28511 tablet_bootstrap.cc:654] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:10.646955 28511 tablet_bootstrap.cc:492] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: No bootstrap required, opened a new log
I20260321 14:56:10.647553 28511 ts_tablet_manager.cc:1403] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: Time spent bootstrapping tablet: real 0.014s	user 0.002s	sys 0.008s
I20260321 14:56:10.650786 28511 raft_consensus.cc:359] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 33161 } }
I20260321 14:56:10.651494 28511 raft_consensus.cc:385] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:10.651803 28511 raft_consensus.cc:740] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1b2bd3d0959f4a06880e93fefa2c6261, State: Initialized, Role: FOLLOWER
I20260321 14:56:10.652902 28511 consensus_queue.cc:260] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 33161 } }
I20260321 14:56:10.653642 28511 raft_consensus.cc:399] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:10.653967 28511 raft_consensus.cc:493] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:10.654281 28511 raft_consensus.cc:3060] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:10.659703 28511 raft_consensus.cc:515] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 33161 } }
I20260321 14:56:10.660399 28511 leader_election.cc:304] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1b2bd3d0959f4a06880e93fefa2c6261; no voters: 
I20260321 14:56:10.661741 28511 leader_election.cc:290] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:10.662091 28513 raft_consensus.cc:2804] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:10.664943 28513 raft_consensus.cc:697] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 1 LEADER]: Becoming Leader. State: Replica: 1b2bd3d0959f4a06880e93fefa2c6261, State: Running, Role: LEADER
I20260321 14:56:10.666030 28513 consensus_queue.cc:237] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 33161 } }
I20260321 14:56:10.667135 28499 heartbeater.cc:499] Master 127.27.172.254:37599 was elected leader, sending a full tablet report...
I20260321 14:56:10.667747 28511 ts_tablet_manager.cc:1434] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: Time spent starting tablet: real 0.020s	user 0.010s	sys 0.008s
I20260321 14:56:10.683248 28371 catalog_manager.cc:5671] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 reported cstate change: term changed from 0 to 1, leader changed from <none> to 1b2bd3d0959f4a06880e93fefa2c6261 (127.27.172.193). New cstate: current_term: 1 leader_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1b2bd3d0959f4a06880e93fefa2c6261" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 33161 } health_report { overall_health: HEALTHY } } }
W20260321 14:56:11.909350 28454 compilation_manager.cc:213] RowProjector compilation request submit failed: Service unavailable: Thread pool is at capacity (1/1 tasks running, 100/100 tasks queued)
I20260321 14:56:12.615342 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:56:12.648906 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:56:12.649786 28339 tablet_replica.cc:333] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261: stopping tablet replica
I20260321 14:56:12.650556 28339 raft_consensus.cc:2243] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:12.651153 28339 raft_consensus.cc:2272] T 3bff16e9759b48d086d15727a0f64901 P 1b2bd3d0959f4a06880e93fefa2c6261 [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:12.827728 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:56:12.838156 28339 master.cc:562] Master@127.27.172.254:37599 shutting down...
I20260321 14:56:12.940146 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:12.940901 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:12.941256 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P 458706fd8c19421eb4bc8ffe9549af2a: stopping tablet replica
I20260321 14:56:12.960191 28339 master.cc:584] Master@127.27.172.254:37599 shutdown complete.
[       OK ] PredicateTest.TestInt8Predicates (4504 ms)
[ RUN      ] PredicateTest.TestTimestampPredicates
I20260321 14:56:12.995478 28339 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.27.172.254:46299
I20260321 14:56:12.997411 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:13.004896 28521 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:13.005244 28520 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:13.005671 28523 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:13.007709 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:13.009168 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:13.009439 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:13.009642 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104973009628 us; error 0 us; skew 500 ppm
I20260321 14:56:13.010265 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:13.013130 28339 webserver.cc:492] Webserver started at http://127.27.172.254:33729/ using document root <none> and password file <none>
I20260321 14:56:13.013636 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:13.013837 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:13.014110 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:13.015151 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/master-0-root/instance:
uuid: "1ea91bb50b4e415d969f7772067fc6b0"
format_stamp: "Formatted at 2026-03-21 14:56:13 on dist-test-slave-blhp"
I20260321 14:56:13.020337 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.007s	sys 0.000s
I20260321 14:56:13.024194 28528 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:13.025288 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.001s
I20260321 14:56:13.025610 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
uuid: "1ea91bb50b4e415d969f7772067fc6b0"
format_stamp: "Formatted at 2026-03-21 14:56:13 on dist-test-slave-blhp"
I20260321 14:56:13.025903 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:13.066722 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:13.067929 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:13.122951 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.254:46299
I20260321 14:56:13.123041 28579 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.254:46299 every 8 connection(s)
I20260321 14:56:13.127449 28580 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:13.138691 28580 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0: Bootstrap starting.
I20260321 14:56:13.142946 28580 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:13.147631 28580 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0: No bootstrap required, opened a new log
I20260321 14:56:13.149590 28580 raft_consensus.cc:359] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER }
I20260321 14:56:13.149968 28580 raft_consensus.cc:385] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:13.150256 28580 raft_consensus.cc:740] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1ea91bb50b4e415d969f7772067fc6b0, State: Initialized, Role: FOLLOWER
I20260321 14:56:13.150839 28580 consensus_queue.cc:260] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER }
I20260321 14:56:13.151548 28580 raft_consensus.cc:399] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:13.151868 28580 raft_consensus.cc:493] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:13.152168 28580 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:13.156879 28580 raft_consensus.cc:515] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER }
I20260321 14:56:13.157495 28580 leader_election.cc:304] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1ea91bb50b4e415d969f7772067fc6b0; no voters: 
I20260321 14:56:13.158922 28580 leader_election.cc:290] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:13.159216 28583 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:13.161008 28583 raft_consensus.cc:697] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 1 LEADER]: Becoming Leader. State: Replica: 1ea91bb50b4e415d969f7772067fc6b0, State: Running, Role: LEADER
I20260321 14:56:13.161746 28583 consensus_queue.cc:237] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER }
I20260321 14:56:13.162563 28580 sys_catalog.cc:565] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [sys.catalog]: configured and running, proceeding with master startup.
I20260321 14:56:13.164713 28584 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "1ea91bb50b4e415d969f7772067fc6b0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER } }
I20260321 14:56:13.165285 28584 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:13.167008 28585 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 1ea91bb50b4e415d969f7772067fc6b0. Latest consensus state: current_term: 1 leader_uuid: "1ea91bb50b4e415d969f7772067fc6b0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1ea91bb50b4e415d969f7772067fc6b0" member_type: VOTER } }
I20260321 14:56:13.167788 28585 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:13.179265 28588 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260321 14:56:13.187466 28588 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260321 14:56:13.188689 28339 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20260321 14:56:13.197830 28588 catalog_manager.cc:1357] Generated new cluster ID: 9194f6b947744f7e8f1409197974cefe
I20260321 14:56:13.198153 28588 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260321 14:56:13.218021 28588 catalog_manager.cc:1380] Generated new certificate authority record
I20260321 14:56:13.219816 28588 catalog_manager.cc:1514] Loading token signing keys...
I20260321 14:56:13.238214 28588 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0: Generated new TSK 0
I20260321 14:56:13.238920 28588 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260321 14:56:13.255232 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:13.262542 28601 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:13.265411 28602 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:13.270090 28604 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:13.271595 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:13.272755 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:13.272958 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:13.273168 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104973273133 us; error 0 us; skew 500 ppm
I20260321 14:56:13.273932 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:13.277288 28339 webserver.cc:492] Webserver started at http://127.27.172.193:32825/ using document root <none> and password file <none>
I20260321 14:56:13.277985 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:13.278191 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:13.278481 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:13.279567 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root/instance:
uuid: "f51769721b0747ee9f8e2b864419b197"
format_stamp: "Formatted at 2026-03-21 14:56:13 on dist-test-slave-blhp"
I20260321 14:56:13.284405 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20260321 14:56:13.288331 28609 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:13.289398 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.002s
I20260321 14:56:13.289722 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
uuid: "f51769721b0747ee9f8e2b864419b197"
format_stamp: "Formatted at 2026-03-21 14:56:13 on dist-test-slave-blhp"
I20260321 14:56:13.290007 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestTimestampPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:13.301899 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:13.303098 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:13.304791 28339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260321 14:56:13.307313 28339 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260321 14:56:13.307495 28339 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:13.307807 28339 ts_tablet_manager.cc:616] Registered 0 tablets
I20260321 14:56:13.307946 28339 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:13.370656 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.193:42489
I20260321 14:56:13.370793 28671 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.193:42489 every 8 connection(s)
I20260321 14:56:13.389632 28672 heartbeater.cc:344] Connected to a master server at 127.27.172.254:46299
I20260321 14:56:13.390008 28672 heartbeater.cc:461] Registering TS with master...
I20260321 14:56:13.390805 28672 heartbeater.cc:507] Master 127.27.172.254:46299 requested a full tablet report, sending...
I20260321 14:56:13.392928 28545 ts_manager.cc:194] Registered new tserver with Master: f51769721b0747ee9f8e2b864419b197 (127.27.172.193:42489)
I20260321 14:56:13.393779 28339 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.018938338s
I20260321 14:56:13.394717 28545 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51106
I20260321 14:56:13.422101 28545 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51118:
name: "table"
schema {
  columns {
    name: "key"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "value"
    type: UNIXTIME_MICROS
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20260321 14:56:13.460781 28637 tablet_service.cc:1511] Processing CreateTablet for tablet 945837ecb6944665986036740ae4c4a0 (DEFAULT_TABLE table=table [id=5c88fef1e35843c3baa1a3eec5eeb535]), partition=RANGE (key) PARTITION UNBOUNDED
I20260321 14:56:13.462064 28637 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 945837ecb6944665986036740ae4c4a0. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:13.477989 28684 tablet_bootstrap.cc:492] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: Bootstrap starting.
I20260321 14:56:13.482924 28684 tablet_bootstrap.cc:654] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:13.488785 28684 tablet_bootstrap.cc:492] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: No bootstrap required, opened a new log
I20260321 14:56:13.489413 28684 ts_tablet_manager.cc:1403] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: Time spent bootstrapping tablet: real 0.012s	user 0.004s	sys 0.005s
I20260321 14:56:13.491421 28684 raft_consensus.cc:359] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f51769721b0747ee9f8e2b864419b197" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 42489 } }
I20260321 14:56:13.492123 28684 raft_consensus.cc:385] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:13.492439 28684 raft_consensus.cc:740] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f51769721b0747ee9f8e2b864419b197, State: Initialized, Role: FOLLOWER
I20260321 14:56:13.493136 28684 consensus_queue.cc:260] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f51769721b0747ee9f8e2b864419b197" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 42489 } }
I20260321 14:56:13.493655 28684 raft_consensus.cc:399] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:13.493898 28684 raft_consensus.cc:493] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:13.494163 28684 raft_consensus.cc:3060] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:13.499396 28684 raft_consensus.cc:515] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f51769721b0747ee9f8e2b864419b197" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 42489 } }
I20260321 14:56:13.500202 28684 leader_election.cc:304] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f51769721b0747ee9f8e2b864419b197; no voters: 
I20260321 14:56:13.502673 28684 leader_election.cc:290] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:13.503233 28686 raft_consensus.cc:2804] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:13.506286 28672 heartbeater.cc:499] Master 127.27.172.254:46299 was elected leader, sending a full tablet report...
I20260321 14:56:13.507133 28684 ts_tablet_manager.cc:1434] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: Time spent starting tablet: real 0.017s	user 0.018s	sys 0.000s
I20260321 14:56:13.507344 28686 raft_consensus.cc:697] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 1 LEADER]: Becoming Leader. State: Replica: f51769721b0747ee9f8e2b864419b197, State: Running, Role: LEADER
I20260321 14:56:13.508216 28686 consensus_queue.cc:237] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f51769721b0747ee9f8e2b864419b197" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 42489 } }
I20260321 14:56:13.517835 28545 catalog_manager.cc:5671] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 reported cstate change: term changed from 0 to 1, leader changed from <none> to f51769721b0747ee9f8e2b864419b197 (127.27.172.193). New cstate: current_term: 1 leader_uuid: "f51769721b0747ee9f8e2b864419b197" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f51769721b0747ee9f8e2b864419b197" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 42489 } health_report { overall_health: HEALTHY } } }
I20260321 14:56:15.480566 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:56:15.508309 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:56:15.509485 28339 tablet_replica.cc:333] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197: stopping tablet replica
I20260321 14:56:15.511917 28339 raft_consensus.cc:2243] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:15.512785 28339 raft_consensus.cc:2272] T 945837ecb6944665986036740ae4c4a0 P f51769721b0747ee9f8e2b864419b197 [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:15.536650 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:56:15.548027 28339 master.cc:562] Master@127.27.172.254:46299 shutting down...
I20260321 14:56:15.572162 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:15.572772 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0 [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:15.573137 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P 1ea91bb50b4e415d969f7772067fc6b0: stopping tablet replica
I20260321 14:56:15.592959 28339 master.cc:584] Master@127.27.172.254:46299 shutdown complete.
[       OK ] PredicateTest.TestTimestampPredicates (2634 ms)
[ RUN      ] PredicateTest.TestDecimalPredicates
I20260321 14:56:15.623581 28339 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.27.172.254:45975
I20260321 14:56:15.625011 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:15.632949 28693 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:15.633595 28694 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:15.636708 28696 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:15.638129 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:15.639341 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:15.639580 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:15.640079 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104975639994 us; error 0 us; skew 500 ppm
I20260321 14:56:15.640935 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:15.644026 28339 webserver.cc:492] Webserver started at http://127.27.172.254:33981/ using document root <none> and password file <none>
I20260321 14:56:15.644572 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:15.644798 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:15.645074 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:15.646137 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/master-0-root/instance:
uuid: "2045569c618545c7bf154098eea15e54"
format_stamp: "Formatted at 2026-03-21 14:56:15 on dist-test-slave-blhp"
I20260321 14:56:15.652719 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.002s
I20260321 14:56:15.656286 28701 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:15.657254 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.002s
I20260321 14:56:15.657657 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
uuid: "2045569c618545c7bf154098eea15e54"
format_stamp: "Formatted at 2026-03-21 14:56:15 on dist-test-slave-blhp"
I20260321 14:56:15.658180 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:15.682053 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:15.683152 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:15.732021 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.254:45975
I20260321 14:56:15.732132 28752 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.254:45975 every 8 connection(s)
I20260321 14:56:15.736373 28753 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:15.751892 28753 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54: Bootstrap starting.
I20260321 14:56:15.758383 28753 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:15.763619 28753 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54: No bootstrap required, opened a new log
I20260321 14:56:15.766137 28753 raft_consensus.cc:359] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER }
I20260321 14:56:15.766675 28753 raft_consensus.cc:385] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:15.766930 28753 raft_consensus.cc:740] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2045569c618545c7bf154098eea15e54, State: Initialized, Role: FOLLOWER
I20260321 14:56:15.767868 28753 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER }
I20260321 14:56:15.768966 28753 raft_consensus.cc:399] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:15.769603 28753 raft_consensus.cc:493] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:15.770051 28753 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:15.778417 28753 raft_consensus.cc:515] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER }
I20260321 14:56:15.779258 28753 leader_election.cc:304] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2045569c618545c7bf154098eea15e54; no voters: 
I20260321 14:56:15.780694 28753 leader_election.cc:290] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:15.781013 28756 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:15.782917 28756 raft_consensus.cc:697] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 1 LEADER]: Becoming Leader. State: Replica: 2045569c618545c7bf154098eea15e54, State: Running, Role: LEADER
I20260321 14:56:15.783723 28756 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER }
I20260321 14:56:15.784142 28753 sys_catalog.cc:565] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [sys.catalog]: configured and running, proceeding with master startup.
I20260321 14:56:15.786437 28757 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2045569c618545c7bf154098eea15e54" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER } }
I20260321 14:56:15.787006 28757 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:15.796927 28761 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260321 14:56:15.796787 28758 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2045569c618545c7bf154098eea15e54. Latest consensus state: current_term: 1 leader_uuid: "2045569c618545c7bf154098eea15e54" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2045569c618545c7bf154098eea15e54" member_type: VOTER } }
I20260321 14:56:15.797536 28758 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:15.802495 28761 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260321 14:56:15.808126 28339 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20260321 14:56:15.812945 28761 catalog_manager.cc:1357] Generated new cluster ID: d9b040331f3b48aca84cdaa8e0be75dc
I20260321 14:56:15.813254 28761 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260321 14:56:15.834764 28761 catalog_manager.cc:1380] Generated new certificate authority record
I20260321 14:56:15.836328 28761 catalog_manager.cc:1514] Loading token signing keys...
I20260321 14:56:15.857570 28761 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54: Generated new TSK 0
I20260321 14:56:15.858258 28761 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260321 14:56:15.874971 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:15.882782 28775 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:15.883175 28776 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:15.886178 28339 server_base.cc:1061] running on GCE node
W20260321 14:56:15.887096 28778 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:15.888218 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:15.888531 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:15.888731 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104975888716 us; error 0 us; skew 500 ppm
I20260321 14:56:15.889453 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:15.892614 28339 webserver.cc:492] Webserver started at http://127.27.172.193:42085/ using document root <none> and password file <none>
I20260321 14:56:15.893290 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:15.893524 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:15.893844 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:15.895273 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root/instance:
uuid: "9248e34322954a42bec6a2784835701b"
format_stamp: "Formatted at 2026-03-21 14:56:15 on dist-test-slave-blhp"
I20260321 14:56:15.901314 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.005s	sys 0.000s
I20260321 14:56:15.906121 28783 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:15.907133 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.000s
I20260321 14:56:15.907505 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
uuid: "9248e34322954a42bec6a2784835701b"
format_stamp: "Formatted at 2026-03-21 14:56:15 on dist-test-slave-blhp"
I20260321 14:56:15.907908 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.PredicateTest.TestDecimalPredicates.1774104968283944-28339-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:15.924681 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:15.926035 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:15.928066 28339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260321 14:56:15.931135 28339 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260321 14:56:15.931399 28339 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:15.931715 28339 ts_tablet_manager.cc:616] Registered 0 tablets
I20260321 14:56:15.931919 28339 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.001s
I20260321 14:56:15.988996 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.193:39611
I20260321 14:56:15.989045 28845 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.193:39611 every 8 connection(s)
I20260321 14:56:16.012483 28846 heartbeater.cc:344] Connected to a master server at 127.27.172.254:45975
I20260321 14:56:16.013228 28846 heartbeater.cc:461] Registering TS with master...
I20260321 14:56:16.014281 28846 heartbeater.cc:507] Master 127.27.172.254:45975 requested a full tablet report, sending...
I20260321 14:56:16.016471 28718 ts_manager.cc:194] Registered new tserver with Master: 9248e34322954a42bec6a2784835701b (127.27.172.193:39611)
I20260321 14:56:16.017323 28339 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.023666706s
I20260321 14:56:16.019527 28718 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:60092
I20260321 14:56:16.043560 28718 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:60106:
name: "table"
schema {
  columns {
    name: "key"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "value"
    type: DECIMAL128
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    type_attributes {
      precision: 38
      scale: 2
    }
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20260321 14:56:16.082484 28811 tablet_service.cc:1511] Processing CreateTablet for tablet a176854b27a3471b9da8d7357e33cec0 (DEFAULT_TABLE table=table [id=6ea005b8733944c5986b9ae75e1c85a6]), partition=RANGE (key) PARTITION UNBOUNDED
I20260321 14:56:16.083576 28811 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a176854b27a3471b9da8d7357e33cec0. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:16.098191 28858 tablet_bootstrap.cc:492] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: Bootstrap starting.
I20260321 14:56:16.103442 28858 tablet_bootstrap.cc:654] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:16.108202 28858 tablet_bootstrap.cc:492] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: No bootstrap required, opened a new log
I20260321 14:56:16.108909 28858 ts_tablet_manager.cc:1403] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: Time spent bootstrapping tablet: real 0.011s	user 0.005s	sys 0.004s
I20260321 14:56:16.111187 28858 raft_consensus.cc:359] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9248e34322954a42bec6a2784835701b" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39611 } }
I20260321 14:56:16.111646 28858 raft_consensus.cc:385] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:16.111941 28858 raft_consensus.cc:740] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9248e34322954a42bec6a2784835701b, State: Initialized, Role: FOLLOWER
I20260321 14:56:16.112535 28858 consensus_queue.cc:260] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9248e34322954a42bec6a2784835701b" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39611 } }
I20260321 14:56:16.113023 28858 raft_consensus.cc:399] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:16.113277 28858 raft_consensus.cc:493] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:16.113548 28858 raft_consensus.cc:3060] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:16.121318 28858 raft_consensus.cc:515] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9248e34322954a42bec6a2784835701b" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39611 } }
I20260321 14:56:16.122094 28858 leader_election.cc:304] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9248e34322954a42bec6a2784835701b; no voters: 
I20260321 14:56:16.125451 28858 leader_election.cc:290] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:16.125811 28860 raft_consensus.cc:2804] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:16.130434 28846 heartbeater.cc:499] Master 127.27.172.254:45975 was elected leader, sending a full tablet report...
I20260321 14:56:16.131287 28858 ts_tablet_manager.cc:1434] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: Time spent starting tablet: real 0.022s	user 0.014s	sys 0.008s
I20260321 14:56:16.134888 28860 raft_consensus.cc:697] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 1 LEADER]: Becoming Leader. State: Replica: 9248e34322954a42bec6a2784835701b, State: Running, Role: LEADER
I20260321 14:56:16.135871 28860 consensus_queue.cc:237] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9248e34322954a42bec6a2784835701b" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39611 } }
I20260321 14:56:16.142791 28717 catalog_manager.cc:5671] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b reported cstate change: term changed from 0 to 1, leader changed from <none> to 9248e34322954a42bec6a2784835701b (127.27.172.193). New cstate: current_term: 1 leader_uuid: "9248e34322954a42bec6a2784835701b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9248e34322954a42bec6a2784835701b" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39611 } health_report { overall_health: HEALTHY } } }
I20260321 14:56:17.922714 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:56:17.954154 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:56:17.955049 28339 tablet_replica.cc:333] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b: stopping tablet replica
I20260321 14:56:17.955940 28339 raft_consensus.cc:2243] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:17.956590 28339 raft_consensus.cc:2272] T a176854b27a3471b9da8d7357e33cec0 P 9248e34322954a42bec6a2784835701b [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:17.977694 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:56:17.989329 28339 master.cc:562] Master@127.27.172.254:45975 shutting down...
I20260321 14:56:18.012981 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:18.013546 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54 [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:18.013862 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P 2045569c618545c7bf154098eea15e54: stopping tablet replica
I20260321 14:56:18.034256 28339 master.cc:584] Master@127.27.172.254:45975 shutdown complete.
[       OK ] PredicateTest.TestDecimalPredicates (2435 ms)
[----------] 3 tests from PredicateTest (9575 ms total)

[----------] 1 test from BloomFilterPredicateTest
[ RUN      ] BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark
I20260321 14:56:18.058104 28339 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.27.172.254:38087
I20260321 14:56:18.059484 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:18.064963 28867 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:18.065228 28868 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:18.067070 28870 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:18.069557 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:18.070981 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:18.071214 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:18.071632 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104978071593 us; error 0 us; skew 500 ppm
I20260321 14:56:18.072396 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:18.075592 28339 webserver.cc:492] Webserver started at http://127.27.172.254:45487/ using document root <none> and password file <none>
I20260321 14:56:18.076524 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:18.076833 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:18.077138 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:18.078316 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/master-0-root/instance:
uuid: "2387afe3a1c8411c93ee63472fc845e5"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.083195 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.007s	sys 0.000s
I20260321 14:56:18.086931 28875 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:18.087929 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.000s	sys 0.003s
I20260321 14:56:18.088194 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/master-0-root
uuid: "2387afe3a1c8411c93ee63472fc845e5"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.088460 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:18.124197 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:18.125237 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:18.181623 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.254:38087
I20260321 14:56:18.181744 28926 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.254:38087 every 8 connection(s)
I20260321 14:56:18.186161 28927 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:18.207158 28927 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5: Bootstrap starting.
I20260321 14:56:18.214073 28927 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:18.220247 28927 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5: No bootstrap required, opened a new log
I20260321 14:56:18.223006 28927 raft_consensus.cc:359] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER }
I20260321 14:56:18.223577 28927 raft_consensus.cc:385] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:18.223898 28927 raft_consensus.cc:740] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2387afe3a1c8411c93ee63472fc845e5, State: Initialized, Role: FOLLOWER
I20260321 14:56:18.224552 28927 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER }
I20260321 14:56:18.225184 28927 raft_consensus.cc:399] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:18.225502 28927 raft_consensus.cc:493] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:18.225826 28927 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:18.234783 28927 raft_consensus.cc:515] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER }
I20260321 14:56:18.235945 28927 leader_election.cc:304] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2387afe3a1c8411c93ee63472fc845e5; no voters: 
I20260321 14:56:18.237792 28927 leader_election.cc:290] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:18.238106 28930 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:18.240201 28930 raft_consensus.cc:697] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 1 LEADER]: Becoming Leader. State: Replica: 2387afe3a1c8411c93ee63472fc845e5, State: Running, Role: LEADER
I20260321 14:56:18.240989 28930 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER }
I20260321 14:56:18.241518 28927 sys_catalog.cc:565] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [sys.catalog]: configured and running, proceeding with master startup.
I20260321 14:56:18.243885 28931 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2387afe3a1c8411c93ee63472fc845e5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER } }
I20260321 14:56:18.244460 28931 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:18.246276 28932 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2387afe3a1c8411c93ee63472fc845e5. Latest consensus state: current_term: 1 leader_uuid: "2387afe3a1c8411c93ee63472fc845e5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2387afe3a1c8411c93ee63472fc845e5" member_type: VOTER } }
I20260321 14:56:18.247032 28932 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:18.253059 28935 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260321 14:56:18.262918 28935 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260321 14:56:18.278072 28935 catalog_manager.cc:1357] Generated new cluster ID: 8e3a4acf174a436c8f0ed5849977385d
I20260321 14:56:18.278544 28935 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260321 14:56:18.284329 28339 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20260321 14:56:18.320942 28935 catalog_manager.cc:1380] Generated new certificate authority record
I20260321 14:56:18.322562 28935 catalog_manager.cc:1514] Loading token signing keys...
I20260321 14:56:18.336539 28935 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5: Generated new TSK 0
I20260321 14:56:18.337203 28935 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260321 14:56:18.351626 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:18.359654 28949 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:18.360273 28950 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:18.369545 28339 server_base.cc:1061] running on GCE node
W20260321 14:56:18.370097 28953 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:18.371134 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:18.371321 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:18.371538 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104978371497 us; error 0 us; skew 500 ppm
I20260321 14:56:18.372627 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:18.378214 28339 webserver.cc:492] Webserver started at http://127.27.172.193:36763/ using document root <none> and password file <none>
I20260321 14:56:18.378906 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:18.379155 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:18.379515 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:18.380684 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/ts-0-root/instance:
uuid: "2cb37775b78f4df7a8fe15a87533168f"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.385607 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20260321 14:56:18.390069 28957 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:18.391036 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.000s	sys 0.002s
I20260321 14:56:18.391345 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/ts-0-root
uuid: "2cb37775b78f4df7a8fe15a87533168f"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.391655 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark.1774104968283944-28339-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:18.421850 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:18.422917 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:18.424607 28339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260321 14:56:18.427278 28339 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260321 14:56:18.427470 28339 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:18.427791 28339 ts_tablet_manager.cc:616] Registered 0 tablets
I20260321 14:56:18.427934 28339 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:18.492798 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.193:40069
I20260321 14:56:18.492883 29019 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.193:40069 every 8 connection(s)
I20260321 14:56:18.548827 29021 heartbeater.cc:344] Connected to a master server at 127.27.172.254:38087
I20260321 14:56:18.549831 29021 heartbeater.cc:461] Registering TS with master...
I20260321 14:56:18.551177 29021 heartbeater.cc:507] Master 127.27.172.254:38087 requested a full tablet report, sending...
I20260321 14:56:18.554210 28892 ts_manager.cc:194] Registered new tserver with Master: 2cb37775b78f4df7a8fe15a87533168f (127.27.172.193:40069)
I20260321 14:56:18.555188 28339 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.023349477s
I20260321 14:56:18.556923 28892 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:47986
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/predicate-test.cc:1510: Skipped
test is skipped; set KUDU_ALLOW_SLOW_TESTS=1 to run
I20260321 14:56:18.592298 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:56:18.638411 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:56:18.659009 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:56:18.671120 28339 master.cc:562] Master@127.27.172.254:38087 shutting down...
I20260321 14:56:18.705837 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:56:18.706863 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5 [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:56:18.707464 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P 2387afe3a1c8411c93ee63472fc845e5: stopping tablet replica
I20260321 14:56:18.731482 28339 master.cc:584] Master@127.27.172.254:38087 shutdown complete.
[  SKIPPED ] BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark (710 ms)
[----------] 1 test from BloomFilterPredicateTest (710 ms total)

[----------] 1 test from ParameterizedBloomFilterPredicateTest
[ RUN      ] ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings/3
I20260321 14:56:18.769698 28339 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.27.172.254:35293
I20260321 14:56:18.770969 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:18.778071 29032 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:18.778187 29031 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:18.781632 28339 server_base.cc:1061] running on GCE node
W20260321 14:56:18.781599 29034 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:18.782665 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:18.782857 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:18.783030 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104978783016 us; error 0 us; skew 500 ppm
I20260321 14:56:18.783589 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:18.786618 28339 webserver.cc:492] Webserver started at http://127.27.172.254:46483/ using document root <none> and password file <none>
I20260321 14:56:18.787160 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:18.787367 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:18.787668 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:18.789259 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/master-0-root/instance:
uuid: "a8dda96a80f145de9917243caea6b61e"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.794553 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.006s	sys 0.001s
I20260321 14:56:18.798331 29039 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:18.799239 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.000s
I20260321 14:56:18.799599 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/master-0-root
uuid: "a8dda96a80f145de9917243caea6b61e"
format_stamp: "Formatted at 2026-03-21 14:56:18 on dist-test-slave-blhp"
I20260321 14:56:18.799989 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:18.830974 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:18.832592 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:18.893604 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.254:35293
I20260321 14:56:18.893824 29090 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.254:35293 every 8 connection(s)
I20260321 14:56:18.899008 29091 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:18.911553 29091 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: Bootstrap starting.
I20260321 14:56:18.916967 29091 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:18.922005 29091 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: No bootstrap required, opened a new log
I20260321 14:56:18.924374 29091 raft_consensus.cc:359] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER }
I20260321 14:56:18.924894 29091 raft_consensus.cc:385] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:18.925170 29091 raft_consensus.cc:740] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8dda96a80f145de9917243caea6b61e, State: Initialized, Role: FOLLOWER
I20260321 14:56:18.925827 29091 consensus_queue.cc:260] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER }
I20260321 14:56:18.926322 29091 raft_consensus.cc:399] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:18.926600 29091 raft_consensus.cc:493] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:18.926970 29091 raft_consensus.cc:3060] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:18.932375 29091 raft_consensus.cc:515] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER }
I20260321 14:56:18.933022 29091 leader_election.cc:304] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a8dda96a80f145de9917243caea6b61e; no voters: 
I20260321 14:56:18.934310 29091 leader_election.cc:290] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:18.934839 29094 raft_consensus.cc:2804] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:18.936447 29094 raft_consensus.cc:697] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 LEADER]: Becoming Leader. State: Replica: a8dda96a80f145de9917243caea6b61e, State: Running, Role: LEADER
I20260321 14:56:18.937361 29094 consensus_queue.cc:237] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER }
I20260321 14:56:18.937951 29091 sys_catalog.cc:565] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [sys.catalog]: configured and running, proceeding with master startup.
I20260321 14:56:18.940774 29096 sys_catalog.cc:455] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [sys.catalog]: SysCatalogTable state changed. Reason: New leader a8dda96a80f145de9917243caea6b61e. Latest consensus state: current_term: 1 leader_uuid: "a8dda96a80f145de9917243caea6b61e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER } }
I20260321 14:56:18.940699 29095 sys_catalog.cc:455] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a8dda96a80f145de9917243caea6b61e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a8dda96a80f145de9917243caea6b61e" member_type: VOTER } }
I20260321 14:56:18.941429 29096 sys_catalog.cc:458] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:18.941499 29095 sys_catalog.cc:458] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [sys.catalog]: This master's current role is: LEADER
I20260321 14:56:18.950784 29099 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260321 14:56:18.959241 29099 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260321 14:56:18.961604 28339 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20260321 14:56:18.970821 29099 catalog_manager.cc:1357] Generated new cluster ID: 256c96c3aafc4dea948c17561fc7777a
I20260321 14:56:18.971197 29099 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260321 14:56:18.990969 29099 catalog_manager.cc:1380] Generated new certificate authority record
I20260321 14:56:18.992753 29099 catalog_manager.cc:1514] Loading token signing keys...
I20260321 14:56:19.018491 29099 catalog_manager.cc:6044] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: Generated new TSK 0
I20260321 14:56:19.019207 29099 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260321 14:56:19.028453 28339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260321 14:56:19.037102 29112 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:19.040652 29113 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260321 14:56:19.041826 29115 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260321 14:56:19.042685 28339 server_base.cc:1061] running on GCE node
I20260321 14:56:19.043792 28339 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20260321 14:56:19.044054 28339 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20260321 14:56:19.044358 28339 hybrid_clock.cc:648] HybridClock initialized: now 1774104979044339 us; error 0 us; skew 500 ppm
I20260321 14:56:19.045092 28339 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260321 14:56:19.048354 28339 webserver.cc:492] Webserver started at http://127.27.172.193:40883/ using document root <none> and password file <none>
I20260321 14:56:19.049006 28339 fs_manager.cc:362] Metadata directory not provided
I20260321 14:56:19.049252 28339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260321 14:56:19.049607 28339 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260321 14:56:19.051358 28339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/ts-0-root/instance:
uuid: "dbf7f9b7ffe2493b8b236703cb7b8add"
format_stamp: "Formatted at 2026-03-21 14:56:19 on dist-test-slave-blhp"
I20260321 14:56:19.057637 28339 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.000s
I20260321 14:56:19.062204 29120 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:19.063187 28339 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.001s	sys 0.001s
I20260321 14:56:19.063556 28339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/ts-0-root
uuid: "dbf7f9b7ffe2493b8b236703cb7b8add"
format_stamp: "Formatted at 2026-03-21 14:56:19 on dist-test-slave-blhp"
I20260321 14:56:19.063987 28339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260321 14:56:19.090214 28339 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260321 14:56:19.091989 28339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260321 14:56:19.094123 28339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260321 14:56:19.098028 28339 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260321 14:56:19.098392 28339 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:19.098775 28339 ts_tablet_manager.cc:616] Registered 0 tablets
I20260321 14:56:19.099035 28339 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260321 14:56:19.164085 28339 rpc_server.cc:307] RPC server started. Bound to: 127.27.172.193:39785
I20260321 14:56:19.164242 29182 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.27.172.193:39785 every 8 connection(s)
I20260321 14:56:19.183821 29183 heartbeater.cc:344] Connected to a master server at 127.27.172.254:35293
I20260321 14:56:19.184216 29183 heartbeater.cc:461] Registering TS with master...
I20260321 14:56:19.185031 29183 heartbeater.cc:507] Master 127.27.172.254:35293 requested a full tablet report, sending...
I20260321 14:56:19.187354 29056 ts_manager.cc:194] Registered new tserver with Master: dbf7f9b7ffe2493b8b236703cb7b8add (127.27.172.193:39785)
I20260321 14:56:19.188673 28339 internal_mini_cluster.cc:371] 1 TS(s) registered with all masters after 0.021030165s
I20260321 14:56:19.190399 29056 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:43958
I20260321 14:56:19.217989 29056 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:43966:
name: "table"
schema {
  columns {
    name: "key"
    type: INT64
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "value"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20260321 14:56:19.263576 29148 tablet_service.cc:1511] Processing CreateTablet for tablet e59190f4c3f24a398b437e1b1e164d65 (DEFAULT_TABLE table=table [id=2a08f89e700b463ba1485bb16f27a711]), partition=RANGE (key) PARTITION UNBOUNDED
I20260321 14:56:19.264909 29148 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet e59190f4c3f24a398b437e1b1e164d65. 1 dirs total, 0 dirs full, 0 dirs failed
I20260321 14:56:19.282697 29195 tablet_bootstrap.cc:492] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: Bootstrap starting.
I20260321 14:56:19.288246 29195 tablet_bootstrap.cc:654] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: Neither blocks nor log segments found. Creating new log.
I20260321 14:56:19.294166 29195 tablet_bootstrap.cc:492] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: No bootstrap required, opened a new log
I20260321 14:56:19.295012 29195 ts_tablet_manager.cc:1403] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: Time spent bootstrapping tablet: real 0.013s	user 0.010s	sys 0.000s
I20260321 14:56:19.298619 29195 raft_consensus.cc:359] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39785 } }
I20260321 14:56:19.299150 29195 raft_consensus.cc:385] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260321 14:56:19.299404 29195 raft_consensus.cc:740] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dbf7f9b7ffe2493b8b236703cb7b8add, State: Initialized, Role: FOLLOWER
I20260321 14:56:19.300019 29195 consensus_queue.cc:260] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39785 } }
I20260321 14:56:19.300536 29195 raft_consensus.cc:399] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260321 14:56:19.300806 29195 raft_consensus.cc:493] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260321 14:56:19.301095 29195 raft_consensus.cc:3060] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 0 FOLLOWER]: Advancing to term 1
I20260321 14:56:19.308331 29195 raft_consensus.cc:515] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39785 } }
I20260321 14:56:19.309229 29195 leader_election.cc:304] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: dbf7f9b7ffe2493b8b236703cb7b8add; no voters: 
I20260321 14:56:19.310900 29195 leader_election.cc:290] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260321 14:56:19.311426 29197 raft_consensus.cc:2804] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 FOLLOWER]: Leader election won for term 1
I20260321 14:56:19.313198 29183 heartbeater.cc:499] Master 127.27.172.254:35293 was elected leader, sending a full tablet report...
I20260321 14:56:19.313946 29195 ts_tablet_manager.cc:1434] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: Time spent starting tablet: real 0.018s	user 0.010s	sys 0.009s
I20260321 14:56:19.315143 29197 raft_consensus.cc:697] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 LEADER]: Becoming Leader. State: Replica: dbf7f9b7ffe2493b8b236703cb7b8add, State: Running, Role: LEADER
I20260321 14:56:19.315964 29197 consensus_queue.cc:237] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39785 } }
I20260321 14:56:19.323361 29055 catalog_manager.cc:5671] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add reported cstate change: term changed from 0 to 1, leader changed from <none> to dbf7f9b7ffe2493b8b236703cb7b8add (127.27.172.193). New cstate: current_term: 1 leader_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dbf7f9b7ffe2493b8b236703cb7b8add" member_type: VOTER last_known_addr { host: "127.27.172.193" port: 39785 } health_report { overall_health: HEALTHY } } }
W20260321 14:56:22.938865 29179 debug-util.cc:398] Leaking SignalData structure 0x7b08000c8480 after lost signal to thread 28342
W20260321 14:56:22.939769 29179 debug-util.cc:398] Leaking SignalData structure 0x7b0800066780 after lost signal to thread 29090
W20260321 14:56:22.940518 29179 debug-util.cc:398] Leaking SignalData structure 0x7b08000cc300 after lost signal to thread 29182
I20260321 14:56:25.943374 29184 maintenance_manager.cc:419] P dbf7f9b7ffe2493b8b236703cb7b8add: Scheduling UndoDeltaBlockGCOp(e59190f4c3f24a398b437e1b1e164d65): 67404 bytes on disk
I20260321 14:56:25.946106 29125 maintenance_manager.cc:643] P dbf7f9b7ffe2493b8b236703cb7b8add: UndoDeltaBlockGCOp(e59190f4c3f24a398b437e1b1e164d65) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":370,"lbm_reads_lt_1ms":4}
W20260321 14:56:33.722926 29087 debug-util.cc:398] Leaking SignalData structure 0x7b0800032ee0 after lost signal to thread 28342
W20260321 14:56:33.724500 29087 debug-util.cc:398] Leaking SignalData structure 0x7b0800098a80 after lost signal to thread 29090
W20260321 14:56:33.725392 29087 debug-util.cc:398] Leaking SignalData structure 0x7b080003cf40 after lost signal to thread 29182
W20260321 14:56:35.175334 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=10, attempt_no=0}) took 9078 ms (client timeout 9999 ms). Trace:
W20260321 14:56:35.176198 29212 rpcz_store.cc:269] 0321 14:56:26.096794 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:56:26.097005 (+   211us) service_pool.cc:225] Handling call
0321 14:56:35.175102 (+9078097us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:56:26.099455 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:56:26.099869 (+   414us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:56:26.099889 (+    20us) write_op.cc:435] Acquired schema lock
0321 14:56:26.099897 (+     8us) tablet.cc:661] Decoding operations
0321 14:56:26.125072 (+ 25175us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:56:26.125142 (+    70us) write_op.cc:641] Partition lock acquired for write op
0321 14:56:26.125162 (+    20us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:56:26.156417 (+ 31255us) tablet.cc:700] Row locks acquired
0321 14:56:26.156448 (+    31us) write_op.cc:260] PREPARE: finished
0321 14:56:26.156835 (+   387us) write_op.cc:270] Start()
0321 14:56:26.156954 (+   119us) write_op.cc:276] Timestamp: P: 1774104986156727 usec, L: 0
0321 14:56:26.156969 (+    15us) op_driver.cc:348] REPLICATION: starting
0321 14:56:26.157722 (+   753us) log.cc:844] Serialized 32439 byte log entry
0321 14:56:26.166615 (+  8893us) op_driver.cc:464] REPLICATION: finished
0321 14:56:26.168045 (+  1430us) write_op.cc:301] APPLY: starting
0321 14:56:26.168100 (+    55us) tablet.cc:1366] starting BulkCheckPresence
0321 14:56:30.363895 (+4195795us) tablet.cc:1369] finished BulkCheckPresence
0321 14:56:30.364013 (+   118us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:56:35.158630 (+4794617us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:56:35.161471 (+  2841us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:56:35.161532 (+    61us) write_op.cc:312] APPLY: finished
0321 14:56:35.165996 (+  4464us) log.cc:844] Serialized 8083 byte log entry
0321 14:56:35.167506 (+  1510us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:56:35.174269 (+  6763us) write_op.cc:454] Released schema lock
0321 14:56:35.174655 (+   386us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":1273,"cfile_cache_hit":4030,"cfile_cache_hit_bytes":5386095,"cfile_cache_miss":4,"cfile_cache_miss_bytes":6887,"cfile_init":1,"lbm_read_time_us":853,"lbm_reads_lt_1ms":8,"num_ops":1008,"prepare.queue_time_us":988,"prepare.run_cpu_time_us":58527,"prepare.run_wall_time_us":58591,"replication_time_us":9543,"thread_start_us":2315,"threads_started":3,"wal-append.queue_time_us":1375}]]}
W20260321 14:56:44.143236 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=11, attempt_no=0}) took 8615 ms (client timeout 9999 ms). Trace:
W20260321 14:56:44.144053 29212 rpcz_store.cc:269] 0321 14:56:35.527455 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:56:35.527727 (+   272us) service_pool.cc:225] Handling call
0321 14:56:44.142957 (+8615230us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:56:35.533048 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:56:35.533456 (+   408us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:56:35.533494 (+    38us) write_op.cc:435] Acquired schema lock
0321 14:56:35.533502 (+     8us) tablet.cc:661] Decoding operations
0321 14:56:35.560372 (+ 26870us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:56:35.560458 (+    86us) write_op.cc:641] Partition lock acquired for write op
0321 14:56:35.560479 (+    21us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:56:35.589837 (+ 29358us) tablet.cc:700] Row locks acquired
0321 14:56:35.589860 (+    23us) write_op.cc:260] PREPARE: finished
0321 14:56:35.590063 (+   203us) write_op.cc:270] Start()
0321 14:56:35.590185 (+   122us) write_op.cc:276] Timestamp: P: 1774104995590013 usec, L: 0
0321 14:56:35.590213 (+    28us) op_driver.cc:348] REPLICATION: starting
0321 14:56:35.591236 (+  1023us) log.cc:844] Serialized 32439 byte log entry
0321 14:56:35.601691 (+ 10455us) op_driver.cc:464] REPLICATION: finished
0321 14:56:35.602039 (+   348us) write_op.cc:301] APPLY: starting
0321 14:56:35.602124 (+    85us) tablet.cc:1366] starting BulkCheckPresence
0321 14:56:39.896601 (+4294477us) tablet.cc:1369] finished BulkCheckPresence
0321 14:56:39.896779 (+   178us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:56:44.128557 (+4231778us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:56:44.130973 (+  2416us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:56:44.131017 (+    44us) write_op.cc:312] APPLY: finished
0321 14:56:44.136069 (+  5052us) log.cc:844] Serialized 8083 byte log entry
0321 14:56:44.137036 (+   967us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:56:44.141873 (+  4837us) write_op.cc:454] Released schema lock
0321 14:56:44.142254 (+   381us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":134,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5396980,"cfile_cache_miss":1,"cfile_cache_miss_bytes":4106,"lbm_read_time_us":244,"lbm_reads_lt_1ms":1,"num_ops":1008,"prepare.queue_time_us":1437,"prepare.run_cpu_time_us":58277,"prepare.run_wall_time_us":58411,"replication_time_us":11303,"thread_start_us":1381,"threads_started":2,"wal-append.queue_time_us":917}]]}
W20260321 14:56:52.933295 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=12, attempt_no=0}) took 8500 ms (client timeout 9999 ms). Trace:
W20260321 14:56:52.933888 29212 rpcz_store.cc:269] 0321 14:56:44.433081 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:56:44.433330 (+   249us) service_pool.cc:225] Handling call
0321 14:56:52.933084 (+8499754us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:56:44.436699 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:56:44.437058 (+   359us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:56:44.437085 (+    27us) write_op.cc:435] Acquired schema lock
0321 14:56:44.437093 (+     8us) tablet.cc:661] Decoding operations
0321 14:56:44.457278 (+ 20185us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:56:44.457355 (+    77us) write_op.cc:641] Partition lock acquired for write op
0321 14:56:44.457383 (+    28us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:56:44.481451 (+ 24068us) tablet.cc:700] Row locks acquired
0321 14:56:44.481472 (+    21us) write_op.cc:260] PREPARE: finished
0321 14:56:44.481687 (+   215us) write_op.cc:270] Start()
0321 14:56:44.481796 (+   109us) write_op.cc:276] Timestamp: P: 1774105004481632 usec, L: 0
0321 14:56:44.481815 (+    19us) op_driver.cc:348] REPLICATION: starting
0321 14:56:44.482699 (+   884us) log.cc:844] Serialized 32439 byte log entry
0321 14:56:44.489640 (+  6941us) op_driver.cc:464] REPLICATION: finished
0321 14:56:44.490061 (+   421us) write_op.cc:301] APPLY: starting
0321 14:56:44.490123 (+    62us) tablet.cc:1366] starting BulkCheckPresence
0321 14:56:48.647089 (+4156966us) tablet.cc:1369] finished BulkCheckPresence
0321 14:56:48.647235 (+   146us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:56:52.917666 (+4270431us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:56:52.920195 (+  2529us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:56:52.920277 (+    82us) write_op.cc:312] APPLY: finished
0321 14:56:52.925639 (+  5362us) log.cc:844] Serialized 8083 byte log entry
0321 14:56:52.926719 (+  1080us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:56:52.932129 (+  5410us) write_op.cc:454] Released schema lock
0321 14:56:52.932624 (+   495us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":173,"cfile_cache_hit":4032,"cfile_cache_hit_bytes":5388768,"num_ops":1008,"prepare.queue_time_us":835,"prepare.run_cpu_time_us":46246,"prepare.run_wall_time_us":46251,"replication_time_us":7490,"spinlock_wait_cycles":81024,"thread_start_us":1077,"threads_started":2,"wal-append.queue_time_us":1039}]]}
W20260321 14:57:02.233115 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=13, attempt_no=0}) took 9009 ms (client timeout 9999 ms). Trace:
W20260321 14:57:02.233763 29212 rpcz_store.cc:269] 0321 14:56:53.224096 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:56:53.224309 (+   213us) service_pool.cc:225] Handling call
0321 14:57:02.232927 (+9008618us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:56:53.227107 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:56:53.227617 (+   510us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:56:53.227646 (+    29us) write_op.cc:435] Acquired schema lock
0321 14:56:53.227654 (+     8us) tablet.cc:661] Decoding operations
0321 14:56:53.250017 (+ 22363us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:56:53.250099 (+    82us) write_op.cc:641] Partition lock acquired for write op
0321 14:56:53.250119 (+    20us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:56:53.278253 (+ 28134us) tablet.cc:700] Row locks acquired
0321 14:56:53.278275 (+    22us) write_op.cc:260] PREPARE: finished
0321 14:56:53.278473 (+   198us) write_op.cc:270] Start()
0321 14:56:53.278584 (+   111us) write_op.cc:276] Timestamp: P: 1774105013278423 usec, L: 0
0321 14:56:53.278605 (+    21us) op_driver.cc:348] REPLICATION: starting
0321 14:56:53.279520 (+   915us) log.cc:844] Serialized 32439 byte log entry
0321 14:56:53.288258 (+  8738us) op_driver.cc:464] REPLICATION: finished
0321 14:56:53.289206 (+   948us) write_op.cc:301] APPLY: starting
0321 14:56:53.289305 (+    99us) tablet.cc:1366] starting BulkCheckPresence
0321 14:56:57.513698 (+4224393us) tablet.cc:1369] finished BulkCheckPresence
0321 14:56:57.513799 (+   101us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:02.216963 (+4703164us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:02.219455 (+  2492us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:02.219514 (+    59us) write_op.cc:312] APPLY: finished
0321 14:57:02.225385 (+  5871us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:02.227192 (+  1807us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:02.232091 (+  4899us) write_op.cc:454] Released schema lock
0321 14:57:02.232466 (+   375us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":468,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5396980,"cfile_cache_miss":1,"cfile_cache_miss_bytes":4106,"lbm_read_time_us":170,"lbm_reads_lt_1ms":1,"num_ops":1008,"prepare.queue_time_us":1021,"prepare.run_cpu_time_us":52738,"prepare.run_wall_time_us":52741,"replication_time_us":9529,"spinlock_wait_cycles":28800,"thread_start_us":1738,"threads_started":2,"wal-append.queue_time_us":1607}]]}
W20260321 14:57:11.295403 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=14, attempt_no=0}) took 8741 ms (client timeout 9999 ms). Trace:
W20260321 14:57:11.295887 29212 rpcz_store.cc:269] 0321 14:57:02.553651 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:57:02.553965 (+   314us) service_pool.cc:225] Handling call
0321 14:57:11.295219 (+8741254us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:57:02.557083 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:57:02.557417 (+   334us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:57:02.557445 (+    28us) write_op.cc:435] Acquired schema lock
0321 14:57:02.557452 (+     7us) tablet.cc:661] Decoding operations
0321 14:57:02.581528 (+ 24076us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:02.581604 (+    76us) write_op.cc:641] Partition lock acquired for write op
0321 14:57:02.581625 (+    21us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:57:02.612122 (+ 30497us) tablet.cc:700] Row locks acquired
0321 14:57:02.612157 (+    35us) write_op.cc:260] PREPARE: finished
0321 14:57:02.612403 (+   246us) write_op.cc:270] Start()
0321 14:57:02.612564 (+   161us) write_op.cc:276] Timestamp: P: 1774105022612340 usec, L: 0
0321 14:57:02.612601 (+    37us) op_driver.cc:348] REPLICATION: starting
0321 14:57:02.613667 (+  1066us) log.cc:844] Serialized 32439 byte log entry
0321 14:57:02.620507 (+  6840us) op_driver.cc:464] REPLICATION: finished
0321 14:57:02.620861 (+   354us) write_op.cc:301] APPLY: starting
0321 14:57:02.620916 (+    55us) tablet.cc:1366] starting BulkCheckPresence
0321 14:57:07.102897 (+4481981us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:07.102984 (+    87us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:11.280853 (+4177869us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:11.283384 (+  2531us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:11.283429 (+    45us) write_op.cc:312] APPLY: finished
0321 14:57:11.288826 (+  5397us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:11.289796 (+   970us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:11.294423 (+  4627us) write_op.cc:454] Released schema lock
0321 14:57:11.294803 (+   380us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":141,"cfile_cache_hit":4032,"cfile_cache_hit_bytes":5388768,"num_ops":1008,"prepare.queue_time_us":1043,"prepare.run_cpu_time_us":56787,"prepare.run_wall_time_us":56793,"replication_time_us":7808,"spinlock_wait_cycles":21120,"thread_start_us":1199,"threads_started":2,"wal-append.queue_time_us":945}]]}
W20260321 14:57:19.996785 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=15, attempt_no=0}) took 8399 ms (client timeout 9999 ms). Trace:
W20260321 14:57:19.997252 29212 rpcz_store.cc:269] 0321 14:57:11.596893 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:57:11.597091 (+   198us) service_pool.cc:225] Handling call
0321 14:57:19.996610 (+8399519us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:57:11.599896 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:57:11.600236 (+   340us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:57:11.600262 (+    26us) write_op.cc:435] Acquired schema lock
0321 14:57:11.600270 (+     8us) tablet.cc:661] Decoding operations
0321 14:57:11.618805 (+ 18535us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:11.618881 (+    76us) write_op.cc:641] Partition lock acquired for write op
0321 14:57:11.618900 (+    19us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:57:11.644564 (+ 25664us) tablet.cc:700] Row locks acquired
0321 14:57:11.644586 (+    22us) write_op.cc:260] PREPARE: finished
0321 14:57:11.644781 (+   195us) write_op.cc:270] Start()
0321 14:57:11.644891 (+   110us) write_op.cc:276] Timestamp: P: 1774105031644733 usec, L: 0
0321 14:57:11.644911 (+    20us) op_driver.cc:348] REPLICATION: starting
0321 14:57:11.645800 (+   889us) log.cc:844] Serialized 32439 byte log entry
0321 14:57:11.653165 (+  7365us) op_driver.cc:464] REPLICATION: finished
0321 14:57:11.653490 (+   325us) write_op.cc:301] APPLY: starting
0321 14:57:11.653557 (+    67us) tablet.cc:1366] starting BulkCheckPresence
0321 14:57:15.742990 (+4089433us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:15.743075 (+    85us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:19.982111 (+4239036us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:19.984570 (+  2459us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:19.984623 (+    53us) write_op.cc:312] APPLY: finished
0321 14:57:19.990269 (+  5646us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:19.991325 (+  1056us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:19.995878 (+  4553us) write_op.cc:454] Released schema lock
0321 14:57:19.996240 (+   362us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":135,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5396980,"cfile_cache_miss":1,"cfile_cache_miss_bytes":4106,"lbm_read_time_us":120,"lbm_reads_lt_1ms":1,"num_ops":1008,"prepare.queue_time_us":1039,"prepare.run_cpu_time_us":46095,"prepare.run_wall_time_us":46114,"replication_time_us":8164,"spinlock_wait_cycles":27520,"thread_start_us":1206,"threads_started":2,"wal-append.queue_time_us":1085}]]}
W20260321 14:57:29.773777 29212 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=16, attempt_no=0}) took 9426 ms (client timeout 9999 ms). Trace:
W20260321 14:57:29.775527 29212 rpcz_store.cc:269] 0321 14:57:20.347983 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:57:20.348223 (+   240us) service_pool.cc:225] Handling call
0321 14:57:29.773572 (+9425349us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:57:20.350830 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:57:20.351164 (+   334us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:57:20.351192 (+    28us) write_op.cc:435] Acquired schema lock
0321 14:57:20.351199 (+     7us) tablet.cc:661] Decoding operations
0321 14:57:20.374322 (+ 23123us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:20.374394 (+    72us) write_op.cc:641] Partition lock acquired for write op
0321 14:57:20.374419 (+    25us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:57:20.403162 (+ 28743us) tablet.cc:700] Row locks acquired
0321 14:57:20.403186 (+    24us) write_op.cc:260] PREPARE: finished
0321 14:57:20.403352 (+   166us) write_op.cc:270] Start()
0321 14:57:20.403458 (+   106us) write_op.cc:276] Timestamp: P: 1774105040403308 usec, L: 0
0321 14:57:20.403471 (+    13us) op_driver.cc:348] REPLICATION: starting
0321 14:57:20.404366 (+   895us) log.cc:844] Serialized 32439 byte log entry
0321 14:57:20.411608 (+  7242us) op_driver.cc:464] REPLICATION: finished
0321 14:57:20.411906 (+   298us) write_op.cc:301] APPLY: starting
0321 14:57:20.412005 (+    99us) tablet.cc:1366] starting BulkCheckPresence
0321 14:57:24.924207 (+4512202us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:24.924329 (+   122us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:29.752584 (+4828255us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:29.758988 (+  6404us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:29.759052 (+    64us) write_op.cc:312] APPLY: finished
0321 14:57:29.766693 (+  7641us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:29.767955 (+  1262us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:29.772730 (+  4775us) write_op.cc:454] Released schema lock
0321 14:57:29.773131 (+   401us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":141,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5396980,"cfile_cache_miss":1,"cfile_cache_miss_bytes":4106,"lbm_read_time_us":147,"lbm_reads_lt_1ms":1,"num_ops":1008,"prepare.queue_time_us":833,"prepare.run_cpu_time_us":53767,"prepare.run_wall_time_us":53772,"replication_time_us":8045,"spinlock_wait_cycles":356224,"thread_start_us":1083,"threads_started":2,"wal-append.queue_time_us":1181}]]}
W20260321 14:57:39.828433 29087 debug-util.cc:398] Leaking SignalData structure 0x7b08001037e0 after lost signal to thread 28342
W20260321 14:57:39.829969 29087 debug-util.cc:398] Leaking SignalData structure 0x7b0800103400 after lost signal to thread 29090
W20260321 14:57:39.830854 29087 debug-util.cc:398] Leaking SignalData structure 0x7b080005ed40 after lost signal to thread 29182
W20260321 14:57:40.238658 29191 meta_cache.cc:302] tablet e59190f4c3f24a398b437e1b1e164d65: replica dbf7f9b7ffe2493b8b236703cb7b8add (127.27.172.193:39785) has failed: Timed out: Write RPC to 127.27.172.193:39785 timed out after 10.000s (SENT)
W20260321 14:57:40.239586 29191 batcher.cc:441] Timed out: Failed to write batch of 1008 ops to tablet e59190f4c3f24a398b437e1b1e164d65 after 1 attempt(s): Failed to write to server: dbf7f9b7ffe2493b8b236703cb7b8add (127.27.172.193:39785): Write RPC to 127.27.172.193:39785 timed out after 10.000s (SENT)
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/predicate-test.cc:1563: Failure
Failed
Bad status: IO error: failed to flush data: error details are available via KuduSession::GetPendingErrors()
I20260321 14:57:41.104857 28339 tablet.cc:1621] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: MemRowSet was empty: no flush needed.
W20260321 14:57:41.116475 29241 rpcz_store.cc:267] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:41004 (ReqId={client: cfb633794a1d45828ae7b441ab9cbd34, seq_no=17, attempt_no=0}) took 10879 ms (client timeout 9999 ms). Trace:
W20260321 14:57:41.117273 29241 rpcz_store.cc:269] 0321 14:57:30.237410 (+     0us) service_pool.cc:168] Inserting onto call queue
0321 14:57:30.237992 (+   582us) service_pool.cc:225] Handling call
0321 14:57:41.116108 (+10878116us) inbound_call.cc:173] Queueing success response
Related trace 'op':
0321 14:57:30.241577 (+     0us) write_op.cc:183] PREPARE: starting on tablet e59190f4c3f24a398b437e1b1e164d65
0321 14:57:30.241933 (+   356us) write_op.cc:432] Acquiring schema lock in shared mode
0321 14:57:30.241961 (+    28us) write_op.cc:435] Acquired schema lock
0321 14:57:30.241969 (+     8us) tablet.cc:661] Decoding operations
0321 14:57:30.275079 (+ 33110us) write_op.cc:620] Acquiring the partition lock for write op
0321 14:57:30.275196 (+   117us) write_op.cc:641] Partition lock acquired for write op
0321 14:57:30.275221 (+    25us) tablet.cc:684] Acquiring locks for 1008 operations
0321 14:57:30.310038 (+ 34817us) tablet.cc:700] Row locks acquired
0321 14:57:30.310065 (+    27us) write_op.cc:260] PREPARE: finished
0321 14:57:30.310319 (+   254us) write_op.cc:270] Start()
0321 14:57:30.310463 (+   144us) write_op.cc:276] Timestamp: P: 1774105050310253 usec, L: 0
0321 14:57:30.310487 (+    24us) op_driver.cc:348] REPLICATION: starting
0321 14:57:30.311618 (+  1131us) log.cc:844] Serialized 32439 byte log entry
0321 14:57:30.319801 (+  8183us) op_driver.cc:464] REPLICATION: finished
0321 14:57:30.321107 (+  1306us) write_op.cc:301] APPLY: starting
0321 14:57:30.321189 (+    82us) tablet.cc:1366] starting BulkCheckPresence
0321 14:57:35.712590 (+5391401us) tablet.cc:1369] finished BulkCheckPresence
0321 14:57:35.712676 (+    86us) tablet.cc:1371] starting ApplyRowOperation cycle
0321 14:57:41.094063 (+5381387us) tablet.cc:1382] finished ApplyRowOperation cycle
0321 14:57:41.097088 (+  3025us) tablet_metrics.cc:563] ProbeStats: bloom_lookups=2016,key_file_lookups=2016,delta_file_lookups=0,mrs_lookups=0
0321 14:57:41.097157 (+    69us) write_op.cc:312] APPLY: finished
0321 14:57:41.103130 (+  5973us) log.cc:844] Serialized 8083 byte log entry
0321 14:57:41.104585 (+  1455us) write_op.cc:489] Releasing partition, row and schema locks
0321 14:57:41.114481 (+  9896us) write_op.cc:454] Released schema lock
0321 14:57:41.115317 (+   836us) write_op.cc:341] FINISH: Updating metrics
Metrics: {"child_traces":[["op",{"apply.queue_time_us":1026,"cfile_cache_hit":4034,"cfile_cache_hit_bytes":5392982,"num_ops":1008,"prepare.queue_time_us":1069,"prepare.run_cpu_time_us":70483,"prepare.run_wall_time_us":70508,"replication_time_us":9122,"spinlock_wait_cycles":22784,"thread_start_us":1928,"threads_started":3,"wal-append.queue_time_us":1196}]]}
I20260321 14:57:41.443348 28339 tablet_server.cc:179] TabletServer@127.27.172.193:0 shutting down...
I20260321 14:57:41.513790 28339 ts_tablet_manager.cc:1507] Shutting down tablet manager...
I20260321 14:57:41.514707 28339 tablet_replica.cc:333] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add: stopping tablet replica
I20260321 14:57:41.520393 28339 raft_consensus.cc:2243] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:57:41.522794 28339 raft_consensus.cc:2272] T e59190f4c3f24a398b437e1b1e164d65 P dbf7f9b7ffe2493b8b236703cb7b8add [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:57:41.618646 28339 tablet_server.cc:196] TabletServer@127.27.172.193:0 shutdown complete.
I20260321 14:57:41.655553 28339 master.cc:562] Master@127.27.172.254:35293 shutting down...
I20260321 14:57:41.709046 28339 raft_consensus.cc:2243] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 LEADER]: Raft consensus shutting down.
I20260321 14:57:41.709733 28339 raft_consensus.cc:2272] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e [term 1 FOLLOWER]: Raft consensus is shut down!
I20260321 14:57:41.710029 28339 tablet_replica.cc:333] T 00000000000000000000000000000000 P a8dda96a80f145de9917243caea6b61e: stopping tablet replica
I20260321 14:57:41.799431 28339 master.cc:584] Master@127.27.172.254:35293 shutdown complete.
I20260321 14:57:41.879897 28339 test_util.cc:182] -----------------------------------------------
I20260321 14:57:41.880146 28339 test_util.cc:183] Had failures, leaving test files at /tmp/dist-test-taskwtbfzI/test-tmp/predicate-test.1.ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings_3.1774104968283944-28339-0
[  FAILED  ] ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings/3, where GetParam() = (true, true) (83114 ms)
[----------] 1 test from ParameterizedBloomFilterPredicateTest (83128 ms total)

[----------] Global test environment tear-down
[==========] 5 tests from 3 test suites ran. (93415 ms total)
[  PASSED  ] 3 tests.
[  SKIPPED ] 1 test, listed below:
[  SKIPPED ] BloomFilterPredicateTest.TestKuduBloomFilterPredicateBenchmark
[  FAILED  ] 1 test, listed below:
[  FAILED  ] ParameterizedBloomFilterPredicateTest.TestDisabledBloomFilterWithRepeatedStrings/3, where GetParam() = (true, true)

 1 FAILED TEST
I20260321 14:57:41.959801 28339 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/codegen/compilation_manager.cc:213: suppressed but not reported on 261 messages since previous log ~90 seconds ago