Diagnosed failure

TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate: /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/integration-tests/tablet_copy-itest.cc:2164: Failure
Failed
Bad status: Timed out: Timed out waiting for number of WAL segments on tablet f2ae0bbdb8b745c78286580261322e45 on TS 0 to be 6. Found 5
I20260430 07:55:13.529330   420 external_mini_cluster-itest-base.cc:80] Found fatal failure
I20260430 07:55:13.530337   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 0 with UUID 9a1d89563949413caa9bd86e37b2dcfc and pid 2820
************************ BEGIN STACKS **************************
[New LWP 2821]
[New LWP 2822]
[New LWP 2823]
[New LWP 2824]
[New LWP 2830]
[New LWP 2831]
[New LWP 2832]
[New LWP 2835]
[New LWP 2836]
[New LWP 2837]
[New LWP 2838]
[New LWP 2839]
[New LWP 2840]
[New LWP 2841]
[New LWP 2842]
[New LWP 2843]
[New LWP 2844]
[New LWP 2845]
[New LWP 2846]
[New LWP 2847]
[New LWP 2848]
[New LWP 2849]
[New LWP 2850]
[New LWP 2851]
[New LWP 2852]
[New LWP 2853]
[New LWP 2854]
[New LWP 2855]
[New LWP 2856]
[New LWP 2857]
[New LWP 2858]
[New LWP 2859]
[New LWP 2860]
[New LWP 2861]
[New LWP 2862]
[New LWP 2863]
[New LWP 2864]
[New LWP 2865]
[New LWP 2866]
[New LWP 2867]
[New LWP 2868]
[New LWP 2869]
[New LWP 2870]
[New LWP 2871]
[New LWP 2872]
[New LWP 2873]
[New LWP 2874]
[New LWP 2875]
[New LWP 2876]
[New LWP 2877]
[New LWP 2878]
[New LWP 2879]
[New LWP 2880]
[New LWP 2881]
[New LWP 2882]
[New LWP 2883]
[New LWP 2884]
[New LWP 2885]
[New LWP 2886]
[New LWP 2887]
[New LWP 2888]
[New LWP 2889]
[New LWP 2890]
[New LWP 2891]
[New LWP 2892]
[New LWP 2893]
[New LWP 2894]
[New LWP 2895]
[New LWP 2896]
[New LWP 2897]
[New LWP 2898]
[New LWP 2899]
[New LWP 2900]
[New LWP 2901]
[New LWP 2902]
[New LWP 2903]
[New LWP 2904]
[New LWP 2905]
[New LWP 2906]
[New LWP 2907]
[New LWP 2908]
[New LWP 2909]
[New LWP 2910]
[New LWP 2911]
[New LWP 2912]
[New LWP 2913]
[New LWP 2914]
[New LWP 2915]
[New LWP 2916]
[New LWP 2917]
[New LWP 2918]
[New LWP 2919]
[New LWP 2920]
[New LWP 2921]
[New LWP 2922]
[New LWP 2923]
[New LWP 2924]
[New LWP 2925]
[New LWP 2926]
[New LWP 2927]
[New LWP 2928]
[New LWP 2929]
[New LWP 2930]
[New LWP 2931]
[New LWP 2932]
[New LWP 2933]
[New LWP 2934]
[New LWP 2935]
[New LWP 2936]
[New LWP 2937]
[New LWP 2938]
[New LWP 2939]
[New LWP 2940]
[New LWP 2941]
[New LWP 2942]
[New LWP 2943]
[New LWP 2944]
[New LWP 2945]
[New LWP 2946]
[New LWP 2947]
[New LWP 2948]
[New LWP 3256]
[New LWP 3341]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007efe482a9d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 2820 "kudu"   0x00007efe482a9d50 in ?? ()
  2    LWP 2821 "kudu"   0x00007efe482a5fb9 in ?? ()
  3    LWP 2822 "kudu"   0x00007efe482a5fb9 in ?? ()
  4    LWP 2823 "kudu"   0x00007efe482a5fb9 in ?? ()
  5    LWP 2824 "kernel-watcher-" 0x00007efe482a5fb9 in ?? ()
  6    LWP 2830 "ntp client-2830" 0x00007efe482a99e2 in ?? ()
  7    LWP 2831 "file cache-evic" 0x00007efe482a5fb9 in ?? ()
  8    LWP 2832 "sq_acceptor" 0x00007efe40ddcbb9 in ?? ()
  9    LWP 2835 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  10   LWP 2836 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  11   LWP 2837 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  12   LWP 2838 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  13   LWP 2839 "MaintenanceMgr " 0x00007efe482a5ad3 in ?? ()
  14   LWP 2840 "txn-status-mana" 0x00007efe482a5fb9 in ?? ()
  15   LWP 2841 "collect_and_rem" 0x00007efe482a5fb9 in ?? ()
  16   LWP 2842 "tc-session-exp-" 0x00007efe482a5fb9 in ?? ()
  17   LWP 2843 "rpc worker-2843" 0x00007efe482a5ad3 in ?? ()
  18   LWP 2844 "rpc worker-2844" 0x00007efe482a5ad3 in ?? ()
  19   LWP 2845 "rpc worker-2845" 0x00007efe482a5ad3 in ?? ()
  20   LWP 2846 "rpc worker-2846" 0x00007efe482a5ad3 in ?? ()
  21   LWP 2847 "rpc worker-2847" 0x00007efe482a5ad3 in ?? ()
  22   LWP 2848 "rpc worker-2848" 0x00007efe482a5ad3 in ?? ()
  23   LWP 2849 "rpc worker-2849" 0x00007efe482a5ad3 in ?? ()
  24   LWP 2850 "rpc worker-2850" 0x00007efe482a5ad3 in ?? ()
  25   LWP 2851 "rpc worker-2851" 0x00007efe482a5ad3 in ?? ()
  26   LWP 2852 "rpc worker-2852" 0x00007efe482a5ad3 in ?? ()
  27   LWP 2853 "rpc worker-2853" 0x00007efe482a5ad3 in ?? ()
  28   LWP 2854 "rpc worker-2854" 0x00007efe482a5ad3 in ?? ()
  29   LWP 2855 "rpc worker-2855" 0x00007efe482a5ad3 in ?? ()
  30   LWP 2856 "rpc worker-2856" 0x00007efe482a5ad3 in ?? ()
  31   LWP 2857 "rpc worker-2857" 0x00007efe482a5ad3 in ?? ()
  32   LWP 2858 "rpc worker-2858" 0x00007efe482a5ad3 in ?? ()
  33   LWP 2859 "rpc worker-2859" 0x00007efe482a5ad3 in ?? ()
  34   LWP 2860 "rpc worker-2860" 0x00007efe482a5ad3 in ?? ()
  35   LWP 2861 "rpc worker-2861" 0x00007efe482a5ad3 in ?? ()
  36   LWP 2862 "rpc worker-2862" 0x00007efe482a5ad3 in ?? ()
  37   LWP 2863 "rpc worker-2863" 0x00007efe482a5ad3 in ?? ()
  38   LWP 2864 "rpc worker-2864" 0x00007efe482a5ad3 in ?? ()
  39   LWP 2865 "rpc worker-2865" 0x00007efe482a5ad3 in ?? ()
  40   LWP 2866 "rpc worker-2866" 0x00007efe482a5ad3 in ?? ()
  41   LWP 2867 "rpc worker-2867" 0x00007efe482a5ad3 in ?? ()
  42   LWP 2868 "rpc worker-2868" 0x00007efe482a5ad3 in ?? ()
  43   LWP 2869 "rpc worker-2869" 0x00007efe482a5ad3 in ?? ()
  44   LWP 2870 "rpc worker-2870" 0x00007efe482a5ad3 in ?? ()
  45   LWP 2871 "rpc worker-2871" 0x00007efe482a5ad3 in ?? ()
  46   LWP 2872 "rpc worker-2872" 0x00007efe482a5ad3 in ?? ()
  47   LWP 2873 "rpc worker-2873" 0x00007efe482a5ad3 in ?? ()
  48   LWP 2874 "rpc worker-2874" 0x00007efe482a5ad3 in ?? ()
  49   LWP 2875 "rpc worker-2875" 0x00007efe482a5ad3 in ?? ()
  50   LWP 2876 "rpc worker-2876" 0x00007efe482a5ad3 in ?? ()
  51   LWP 2877 "rpc worker-2877" 0x00007efe482a5ad3 in ?? ()
  52   LWP 2878 "rpc worker-2878" 0x00007efe482a5ad3 in ?? ()
  53   LWP 2879 "rpc worker-2879" 0x00007efe482a5ad3 in ?? ()
  54   LWP 2880 "rpc worker-2880" 0x00007efe482a5ad3 in ?? ()
  55   LWP 2881 "rpc worker-2881" 0x00007efe482a5ad3 in ?? ()
  56   LWP 2882 "rpc worker-2882" 0x00007efe482a5ad3 in ?? ()
  57   LWP 2883 "rpc worker-2883" 0x00007efe482a5ad3 in ?? ()
  58   LWP 2884 "rpc worker-2884" 0x00007efe482a5ad3 in ?? ()
  59   LWP 2885 "rpc worker-2885" 0x00007efe482a5ad3 in ?? ()
  60   LWP 2886 "rpc worker-2886" 0x00007efe482a5ad3 in ?? ()
  61   LWP 2887 "rpc worker-2887" 0x00007efe482a5ad3 in ?? ()
  62   LWP 2888 "rpc worker-2888" 0x00007efe482a5ad3 in ?? ()
  63   LWP 2889 "rpc worker-2889" 0x00007efe482a5ad3 in ?? ()
  64   LWP 2890 "rpc worker-2890" 0x00007efe482a5ad3 in ?? ()
  65   LWP 2891 "rpc worker-2891" 0x00007efe482a5ad3 in ?? ()
  66   LWP 2892 "rpc worker-2892" 0x00007efe482a5ad3 in ?? ()
  67   LWP 2893 "rpc worker-2893" 0x00007efe482a5ad3 in ?? ()
  68   LWP 2894 "rpc worker-2894" 0x00007efe482a5ad3 in ?? ()
  69   LWP 2895 "rpc worker-2895" 0x00007efe482a5ad3 in ?? ()
  70   LWP 2896 "rpc worker-2896" 0x00007efe482a5ad3 in ?? ()
  71   LWP 2897 "rpc worker-2897" 0x00007efe482a5ad3 in ?? ()
  72   LWP 2898 "rpc worker-2898" 0x00007efe482a5ad3 in ?? ()
  73   LWP 2899 "rpc worker-2899" 0x00007efe482a5ad3 in ?? ()
  74   LWP 2900 "rpc worker-2900" 0x00007efe482a5ad3 in ?? ()
  75   LWP 2901 "rpc worker-2901" 0x00007efe482a5ad3 in ?? ()
  76   LWP 2902 "rpc worker-2902" 0x00007efe482a5ad3 in ?? ()
  77   LWP 2903 "rpc worker-2903" 0x00007efe482a5ad3 in ?? ()
  78   LWP 2904 "rpc worker-2904" 0x00007efe482a5ad3 in ?? ()
  79   LWP 2905 "rpc worker-2905" 0x00007efe482a5ad3 in ?? ()
  80   LWP 2906 "rpc worker-2906" 0x00007efe482a5ad3 in ?? ()
  81   LWP 2907 "rpc worker-2907" 0x00007efe482a5ad3 in ?? ()
  82   LWP 2908 "rpc worker-2908" 0x00007efe482a5ad3 in ?? ()
  83   LWP 2909 "rpc worker-2909" 0x00007efe482a5ad3 in ?? ()
  84   LWP 2910 "rpc worker-2910" 0x00007efe482a5ad3 in ?? ()
  85   LWP 2911 "rpc worker-2911" 0x00007efe482a5ad3 in ?? ()
  86   LWP 2912 "rpc worker-2912" 0x00007efe482a5ad3 in ?? ()
  87   LWP 2913 "rpc worker-2913" 0x00007efe482a5ad3 in ?? ()
  88   LWP 2914 "rpc worker-2914" 0x00007efe482a5ad3 in ?? ()
  89   LWP 2915 "rpc worker-2915" 0x00007efe482a5ad3 in ?? ()
  90   LWP 2916 "rpc worker-2916" 0x00007efe482a5ad3 in ?? ()
  91   LWP 2917 "rpc worker-2917" 0x00007efe482a5ad3 in ?? ()
  92   LWP 2918 "rpc worker-2918" 0x00007efe482a5ad3 in ?? ()
  93   LWP 2919 "rpc worker-2919" 0x00007efe482a5ad3 in ?? ()
  94   LWP 2920 "rpc worker-2920" 0x00007efe482a5ad3 in ?? ()
  95   LWP 2921 "rpc worker-2921" 0x00007efe482a5ad3 in ?? ()
  96   LWP 2922 "rpc worker-2922" 0x00007efe482a5ad3 in ?? ()
  97   LWP 2923 "rpc worker-2923" 0x00007efe482a5ad3 in ?? ()
  98   LWP 2924 "rpc worker-2924" 0x00007efe482a5ad3 in ?? ()
  99   LWP 2925 "rpc worker-2925" 0x00007efe482a5ad3 in ?? ()
  100  LWP 2926 "rpc worker-2926" 0x00007efe482a5ad3 in ?? ()
  101  LWP 2927 "rpc worker-2927" 0x00007efe482a5ad3 in ?? ()
  102  LWP 2928 "rpc worker-2928" 0x00007efe482a5ad3 in ?? ()
  103  LWP 2929 "rpc worker-2929" 0x00007efe482a5ad3 in ?? ()
  104  LWP 2930 "rpc worker-2930" 0x00007efe482a5ad3 in ?? ()
  105  LWP 2931 "rpc worker-2931" 0x00007efe482a5ad3 in ?? ()
  106  LWP 2932 "rpc worker-2932" 0x00007efe482a5ad3 in ?? ()
  107  LWP 2933 "rpc worker-2933" 0x00007efe482a5ad3 in ?? ()
  108  LWP 2934 "rpc worker-2934" 0x00007efe482a5ad3 in ?? ()
  109  LWP 2935 "rpc worker-2935" 0x00007efe482a5ad3 in ?? ()
  110  LWP 2936 "rpc worker-2936" 0x00007efe482a5ad3 in ?? ()
  111  LWP 2937 "rpc worker-2937" 0x00007efe482a5ad3 in ?? ()
  112  LWP 2938 "rpc worker-2938" 0x00007efe482a5ad3 in ?? ()
  113  LWP 2939 "rpc worker-2939" 0x00007efe482a5ad3 in ?? ()
  114  LWP 2940 "rpc worker-2940" 0x00007efe482a5ad3 in ?? ()
  115  LWP 2941 "rpc worker-2941" 0x00007efe482a5ad3 in ?? ()
  116  LWP 2942 "rpc worker-2942" 0x00007efe482a5ad3 in ?? ()
  117  LWP 2943 "diag-logger-294" 0x00007efe482a5fb9 in ?? ()
  118  LWP 2944 "result-tracker-" 0x00007efe482a5fb9 in ?? ()
  119  LWP 2945 "excess-log-dele" 0x00007efe482a5fb9 in ?? ()
  120  LWP 2946 "acceptor-2946" 0x00007efe40deafc7 in ?? ()
  121  LWP 2947 "heartbeat-2947" 0x00007efe482a5fb9 in ?? ()
  122  LWP 2948 "maintenance_sch" 0x00007efe482a5fb9 in ?? ()
  123  LWP 3256 "wal-append [wor" 0x00007efe482a5fb9 in ?? ()
  124  LWP 3341 "raft [worker]-3" 0x00007efe482a5fb9 in ?? ()

Thread 124 (LWP 3341):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000045e0360e in ?? ()
#2  0x00000000000002a6 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x00007efe39db7bc0 in ?? ()
#5  0x00007efe39db7850 in ?? ()
#6  0x000000000000054c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 123 (LWP 3256):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00006020000b92f8 in ?? ()
#2  0x00000000000012c4 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061a00004ffb0 in ?? ()
#5  0x00007efdfaf8b130 in ?? ()
#6  0x0000000000002588 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 122 (LWP 2948):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdfd0c9700 in ?? ()
#2  0x0000000000000085 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007efdfd0c9750 in ?? ()
#6  0x000000000000010a in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 2947):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000023 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061300001d840 in ?? ()
#5  0x00007efdfd8e1610 in ?? ()
#6  0x0000000000000046 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 120 (LWP 2946):
#0  0x00007efe40deafc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007efdfe108d70 in ?? ()
#3  0x00007efdfe108da0 in ?? ()
#4  0x00007efdfe108ea0 in ?? ()
#5  0x00007efdfe108d90 in ?? ()
#6  0x00007efdfe108e00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007efdfe108f20 in ?? ()
#11 0x00007efdfe10865c in ?? ()
#12 0x00000032fe1085d0 in ?? ()
#13 0x00007efdfd90c000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 2945):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdfe922f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 2944):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdff13b120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007efdff13b110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 2943):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 2942):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 2941):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 2940):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 2939):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 2938):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 2937):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 2936):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 2935):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 2934):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 2933):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 2932):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 2931):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 2930):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 2929):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 2928):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 2927):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 2926):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 2925):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 2924):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 2923):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 2922):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 2921):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 2920):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 2919):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 2918):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 2917):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 2916):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 2915):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 2914):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 2913):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 2912):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 2911):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 2910):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 2909):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 2908):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 2907):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 2906):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 2905):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 2904):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 2903):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 2902):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000006 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a6888 in ?? ()
#4  0x00007efe1453beb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe1453bed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 75 (LWP 2901):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 74 (LWP 2900):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 2899):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 2898):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 2897):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 2896):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 2895):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 2894):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 2893):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 2892):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 2891):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 2890):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 2889):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 2888):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 2887):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 2886):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 2885):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 2884):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 2883):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 2882):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007efe1e71ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe1e71ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe1e71ced0 in ?? ()
#11 0x00007efe1e71ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 2881):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 2880):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 2879):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 2878):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 2877):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 2876):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 2875):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 2874):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 2873):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 2872):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 2871):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 2870):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 2869):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 2868):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 2867):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 2866):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 2865):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 2864):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 2863):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 2862):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000003 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00009ffec in ?? ()
#4  0x00007efe288fdeb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe288fded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00009ffa0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe288fded0 in ?? ()
#11 0x00007efe288fde90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 35 (LWP 2861):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0000967fc in ?? ()
#4  0x00007efe29115eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe29115ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000967b0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe29115ed0 in ?? ()
#11 0x00007efe29115e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 34 (LWP 2860):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x00000000000001b6 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00008fff8 in ?? ()
#4  0x00007efe2992deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2992ded0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 33 (LWP 2859):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00008d008 in ?? ()
#4  0x00007efe2a145eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2a145ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 32 (LWP 2858):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008680c in ?? ()
#4  0x00007efe2a95deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2a95ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000867c0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2a95ded0 in ?? ()
#11 0x00007efe2a95de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 31 (LWP 2857):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000211 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008000c in ?? ()
#4  0x00007efe2b175eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2b175ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00007ffc0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2b175ed0 in ?? ()
#11 0x00007efe2b175e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 30 (LWP 2856):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007681c in ?? ()
#4  0x00007efe2b98deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2b98ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000767d0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2b98ded0 in ?? ()
#11 0x00007efe2b98de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 29 (LWP 2855):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x00000000000005d4 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d000070018 in ?? ()
#4  0x00007efe2c1a5eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2c1a5ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 28 (LWP 2854):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 2853):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 2852):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 2851):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 2850):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 2849):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 2848):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 2847):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 2846):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 2845):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 2844):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 2843):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 2842):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe32b01ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007efe32b01cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 2841):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007efe33329270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 2840):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe33b3f260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007efe33b3f250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 2839):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 2838):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe34b8e340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007efe34b8e330 in ?? ()
#4  0x00007efe34b8e540 in ?? ()
#5  0x00007efe34b8e380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007efe34b8e400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb979842e088000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000fe046969c80 in ?? ()
#17 0x00007efe34b8e3e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe34b8e3b0 in ?? ()
#20 0x3fb979842e088000 in ?? ()
#21 0x0000000034b8e400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb979842e088000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 2837):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe353a5340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007efe353a5330 in ?? ()
#4  0x00007efe353a5540 in ?? ()
#5  0x00007efe353a5380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007efe353a5400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb96a1e49fe0000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff283 in ?? ()
#16 0x00000fe046a6ca80 in ?? ()
#17 0x00007efe353a53e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe353a53b0 in ?? ()
#20 0x3fb96a1e49fe0000 in ?? ()
#21 0x00000000353a5400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb96a1e49fe0000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 2836):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe35bb4340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007efe35bb4330 in ?? ()
#4  0x00007efe35bb4540 in ?? ()
#5  0x00007efe35bb4380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007efe35bb4400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3f91346fc0c64000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff288 in ?? ()
#16 0x00000fe046b6e880 in ?? ()
#17 0x00007efe35bb43e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe35bb43b0 in ?? ()
#20 0x3f91346fc0c64000 in ?? ()
#21 0x0000000035bb4400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3f91346fc0c64000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 2835):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe37bad340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007efe37bad330 in ?? ()
#4  0x00007efe37bad540 in ?? ()
#5  0x00007efe37bad380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007efe37bad400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb95f3305323000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe046f6da80 in ?? ()
#17 0x00007efe37bad3e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 2832):
#0  0x00007efe40ddcbb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007efe395b77b8 in ?? ()
#3  0x000060200001e750 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 2831):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe38db60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 2830):
#0  0x00007efe482a99e2 in ?? ()
#1  0x00007efe385b3bc0 in ?? ()
#2  0x00007efe385b3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007efe385b3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007efe385b3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007efe385b3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007efe4dc52bfc in ?? ()
#12 0x00007efe4dc42209 in ?? ()
#13 0x00007efe4dc467f6 in ?? ()
#14 0x00007efe4dc4b230 in ?? ()
#15 0x00007efe4dc4b059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007efe44cd1529 in ?? ()
#18 0x00007efe4829f6db in ?? ()
#19 0x00000fe0470ae688 in ?? ()
#20 0x00007efe385b3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 2824):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe3a5b8ca0 in ?? ()
#2  0x00000000000000a7 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007efe3a5b8c90 in ?? ()
#6  0x000000000000014e in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 2823):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 2822):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 2821):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007efe3bdbc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 2820):
#0  0x00007efe482a9d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:14.514652   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 1 with UUID e1c76ca1eeb8497091405e8838166d4c and pid 2951
************************ BEGIN STACKS **************************
[New LWP 2952]
[New LWP 2953]
[New LWP 2954]
[New LWP 2955]
[New LWP 2961]
[New LWP 2962]
[New LWP 2963]
[New LWP 2966]
[New LWP 2967]
[New LWP 2968]
[New LWP 2969]
[New LWP 2970]
[New LWP 2971]
[New LWP 2972]
[New LWP 2973]
[New LWP 2974]
[New LWP 2975]
[New LWP 2976]
[New LWP 2977]
[New LWP 2978]
[New LWP 2979]
[New LWP 2980]
[New LWP 2981]
[New LWP 2982]
[New LWP 2983]
[New LWP 2984]
[New LWP 2985]
[New LWP 2986]
[New LWP 2987]
[New LWP 2988]
[New LWP 2989]
[New LWP 2990]
[New LWP 2991]
[New LWP 2992]
[New LWP 2993]
[New LWP 2994]
[New LWP 2995]
[New LWP 2996]
[New LWP 2997]
[New LWP 2998]
[New LWP 2999]
[New LWP 3000]
[New LWP 3001]
[New LWP 3002]
[New LWP 3003]
[New LWP 3004]
[New LWP 3005]
[New LWP 3006]
[New LWP 3007]
[New LWP 3008]
[New LWP 3009]
[New LWP 3010]
[New LWP 3011]
[New LWP 3012]
[New LWP 3013]
[New LWP 3014]
[New LWP 3015]
[New LWP 3016]
[New LWP 3017]
[New LWP 3018]
[New LWP 3019]
[New LWP 3020]
[New LWP 3021]
[New LWP 3022]
[New LWP 3023]
[New LWP 3024]
[New LWP 3025]
[New LWP 3026]
[New LWP 3027]
[New LWP 3028]
[New LWP 3029]
[New LWP 3030]
[New LWP 3031]
[New LWP 3032]
[New LWP 3033]
[New LWP 3034]
[New LWP 3035]
[New LWP 3036]
[New LWP 3037]
[New LWP 3038]
[New LWP 3039]
[New LWP 3040]
[New LWP 3041]
[New LWP 3042]
[New LWP 3043]
[New LWP 3044]
[New LWP 3045]
[New LWP 3046]
[New LWP 3047]
[New LWP 3048]
[New LWP 3049]
[New LWP 3050]
[New LWP 3051]
[New LWP 3052]
[New LWP 3053]
[New LWP 3054]
[New LWP 3055]
[New LWP 3056]
[New LWP 3057]
[New LWP 3058]
[New LWP 3059]
[New LWP 3060]
[New LWP 3061]
[New LWP 3062]
[New LWP 3063]
[New LWP 3064]
[New LWP 3065]
[New LWP 3066]
[New LWP 3067]
[New LWP 3068]
[New LWP 3069]
[New LWP 3070]
[New LWP 3071]
[New LWP 3072]
[New LWP 3073]
[New LWP 3074]
[New LWP 3075]
[New LWP 3076]
[New LWP 3077]
[New LWP 3078]
[New LWP 3079]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007f2243909d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 2951 "kudu"   0x00007f2243909d50 in ?? ()
  2    LWP 2952 "kudu"   0x00007f2243905fb9 in ?? ()
  3    LWP 2953 "kudu"   0x00007f2243905fb9 in ?? ()
  4    LWP 2954 "kudu"   0x00007f2243905fb9 in ?? ()
  5    LWP 2955 "kernel-watcher-" 0x00007f2243905fb9 in ?? ()
  6    LWP 2961 "ntp client-2961" 0x00007f22439099e2 in ?? ()
  7    LWP 2962 "file cache-evic" 0x00007f2243905fb9 in ?? ()
  8    LWP 2963 "sq_acceptor" 0x00007f223c43cbb9 in ?? ()
  9    LWP 2966 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  10   LWP 2967 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  11   LWP 2968 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  12   LWP 2969 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  13   LWP 2970 "MaintenanceMgr " 0x00007f2243905ad3 in ?? ()
  14   LWP 2971 "txn-status-mana" 0x00007f2243905fb9 in ?? ()
  15   LWP 2972 "collect_and_rem" 0x00007f2243905fb9 in ?? ()
  16   LWP 2973 "tc-session-exp-" 0x00007f2243905fb9 in ?? ()
  17   LWP 2974 "rpc worker-2974" 0x00007f2243905ad3 in ?? ()
  18   LWP 2975 "rpc worker-2975" 0x00007f2243905ad3 in ?? ()
  19   LWP 2976 "rpc worker-2976" 0x00007f2243905ad3 in ?? ()
  20   LWP 2977 "rpc worker-2977" 0x00007f2243905ad3 in ?? ()
  21   LWP 2978 "rpc worker-2978" 0x00007f2243905ad3 in ?? ()
  22   LWP 2979 "rpc worker-2979" 0x00007f2243905ad3 in ?? ()
  23   LWP 2980 "rpc worker-2980" 0x00007f2243905ad3 in ?? ()
  24   LWP 2981 "rpc worker-2981" 0x00007f2243905ad3 in ?? ()
  25   LWP 2982 "rpc worker-2982" 0x00007f2243905ad3 in ?? ()
  26   LWP 2983 "rpc worker-2983" 0x00007f2243905ad3 in ?? ()
  27   LWP 2984 "rpc worker-2984" 0x00007f2243905ad3 in ?? ()
  28   LWP 2985 "rpc worker-2985" 0x00007f2243905ad3 in ?? ()
  29   LWP 2986 "rpc worker-2986" 0x00007f2243905ad3 in ?? ()
  30   LWP 2987 "rpc worker-2987" 0x00007f2243905ad3 in ?? ()
  31   LWP 2988 "rpc worker-2988" 0x00007f2243905ad3 in ?? ()
  32   LWP 2989 "rpc worker-2989" 0x00007f2243905ad3 in ?? ()
  33   LWP 2990 "rpc worker-2990" 0x00007f2243905ad3 in ?? ()
  34   LWP 2991 "rpc worker-2991" 0x00007f2243905ad3 in ?? ()
  35   LWP 2992 "rpc worker-2992" 0x00007f2243905ad3 in ?? ()
  36   LWP 2993 "rpc worker-2993" 0x00007f2243905ad3 in ?? ()
  37   LWP 2994 "rpc worker-2994" 0x00007f2243905ad3 in ?? ()
  38   LWP 2995 "rpc worker-2995" 0x00007f2243905ad3 in ?? ()
  39   LWP 2996 "rpc worker-2996" 0x00007f2243905ad3 in ?? ()
  40   LWP 2997 "rpc worker-2997" 0x00007f2243905ad3 in ?? ()
  41   LWP 2998 "rpc worker-2998" 0x00007f2243905ad3 in ?? ()
  42   LWP 2999 "rpc worker-2999" 0x00007f2243905ad3 in ?? ()
  43   LWP 3000 "rpc worker-3000" 0x00007f2243905ad3 in ?? ()
  44   LWP 3001 "rpc worker-3001" 0x00007f2243905ad3 in ?? ()
  45   LWP 3002 "rpc worker-3002" 0x00007f2243905ad3 in ?? ()
  46   LWP 3003 "rpc worker-3003" 0x00007f2243905ad3 in ?? ()
  47   LWP 3004 "rpc worker-3004" 0x00007f2243905ad3 in ?? ()
  48   LWP 3005 "rpc worker-3005" 0x00007f2243905ad3 in ?? ()
  49   LWP 3006 "rpc worker-3006" 0x00007f2243905ad3 in ?? ()
  50   LWP 3007 "rpc worker-3007" 0x00007f2243905ad3 in ?? ()
  51   LWP 3008 "rpc worker-3008" 0x00007f2243905ad3 in ?? ()
  52   LWP 3009 "rpc worker-3009" 0x00007f2243905ad3 in ?? ()
  53   LWP 3010 "rpc worker-3010" 0x00007f2243905ad3 in ?? ()
  54   LWP 3011 "rpc worker-3011" 0x00007f2243905ad3 in ?? ()
  55   LWP 3012 "rpc worker-3012" 0x00007f2243905ad3 in ?? ()
  56   LWP 3013 "rpc worker-3013" 0x00007f2243905ad3 in ?? ()
  57   LWP 3014 "rpc worker-3014" 0x00007f2243905ad3 in ?? ()
  58   LWP 3015 "rpc worker-3015" 0x00007f2243905ad3 in ?? ()
  59   LWP 3016 "rpc worker-3016" 0x00007f2243905ad3 in ?? ()
  60   LWP 3017 "rpc worker-3017" 0x00007f2243905ad3 in ?? ()
  61   LWP 3018 "rpc worker-3018" 0x00007f2243905ad3 in ?? ()
  62   LWP 3019 "rpc worker-3019" 0x00007f2243905ad3 in ?? ()
  63   LWP 3020 "rpc worker-3020" 0x00007f2243905ad3 in ?? ()
  64   LWP 3021 "rpc worker-3021" 0x00007f2243905ad3 in ?? ()
  65   LWP 3022 "rpc worker-3022" 0x00007f2243905ad3 in ?? ()
  66   LWP 3023 "rpc worker-3023" 0x00007f2243905ad3 in ?? ()
  67   LWP 3024 "rpc worker-3024" 0x00007f2243905ad3 in ?? ()
  68   LWP 3025 "rpc worker-3025" 0x00007f2243905ad3 in ?? ()
  69   LWP 3026 "rpc worker-3026" 0x00007f2243905ad3 in ?? ()
  70   LWP 3027 "rpc worker-3027" 0x00007f2243905ad3 in ?? ()
  71   LWP 3028 "rpc worker-3028" 0x00007f2243905ad3 in ?? ()
  72   LWP 3029 "rpc worker-3029" 0x00007f2243905ad3 in ?? ()
  73   LWP 3030 "rpc worker-3030" 0x00007f2243905ad3 in ?? ()
  74   LWP 3031 "rpc worker-3031" 0x00007f2243905ad3 in ?? ()
  75   LWP 3032 "rpc worker-3032" 0x00007f2243905ad3 in ?? ()
  76   LWP 3033 "rpc worker-3033" 0x00007f2243905ad3 in ?? ()
  77   LWP 3034 "rpc worker-3034" 0x00007f2243905ad3 in ?? ()
  78   LWP 3035 "rpc worker-3035" 0x00007f2243905ad3 in ?? ()
  79   LWP 3036 "rpc worker-3036" 0x00007f2243905ad3 in ?? ()
  80   LWP 3037 "rpc worker-3037" 0x00007f2243905ad3 in ?? ()
  81   LWP 3038 "rpc worker-3038" 0x00007f2243905ad3 in ?? ()
  82   LWP 3039 "rpc worker-3039" 0x00007f2243905ad3 in ?? ()
  83   LWP 3040 "rpc worker-3040" 0x00007f2243905ad3 in ?? ()
  84   LWP 3041 "rpc worker-3041" 0x00007f2243905ad3 in ?? ()
  85   LWP 3042 "rpc worker-3042" 0x00007f2243905ad3 in ?? ()
  86   LWP 3043 "rpc worker-3043" 0x00007f2243905ad3 in ?? ()
  87   LWP 3044 "rpc worker-3044" 0x00007f2243905ad3 in ?? ()
  88   LWP 3045 "rpc worker-3045" 0x00007f2243905ad3 in ?? ()
  89   LWP 3046 "rpc worker-3046" 0x00007f2243905ad3 in ?? ()
  90   LWP 3047 "rpc worker-3047" 0x00007f2243905ad3 in ?? ()
  91   LWP 3048 "rpc worker-3048" 0x00007f2243905ad3 in ?? ()
  92   LWP 3049 "rpc worker-3049" 0x00007f2243905ad3 in ?? ()
  93   LWP 3050 "rpc worker-3050" 0x00007f2243905ad3 in ?? ()
  94   LWP 3051 "rpc worker-3051" 0x00007f2243905ad3 in ?? ()
  95   LWP 3052 "rpc worker-3052" 0x00007f2243905ad3 in ?? ()
  96   LWP 3053 "rpc worker-3053" 0x00007f2243905ad3 in ?? ()
  97   LWP 3054 "rpc worker-3054" 0x00007f2243905ad3 in ?? ()
  98   LWP 3055 "rpc worker-3055" 0x00007f2243905ad3 in ?? ()
  99   LWP 3056 "rpc worker-3056" 0x00007f2243905ad3 in ?? ()
  100  LWP 3057 "rpc worker-3057" 0x00007f2243905ad3 in ?? ()
  101  LWP 3058 "rpc worker-3058" 0x00007f2243905ad3 in ?? ()
  102  LWP 3059 "rpc worker-3059" 0x00007f2243905ad3 in ?? ()
  103  LWP 3060 "rpc worker-3060" 0x00007f2243905ad3 in ?? ()
  104  LWP 3061 "rpc worker-3061" 0x00007f2243905ad3 in ?? ()
  105  LWP 3062 "rpc worker-3062" 0x00007f2243905ad3 in ?? ()
  106  LWP 3063 "rpc worker-3063" 0x00007f2243905ad3 in ?? ()
  107  LWP 3064 "rpc worker-3064" 0x00007f2243905ad3 in ?? ()
  108  LWP 3065 "rpc worker-3065" 0x00007f2243905ad3 in ?? ()
  109  LWP 3066 "rpc worker-3066" 0x00007f2243905ad3 in ?? ()
  110  LWP 3067 "rpc worker-3067" 0x00007f2243905ad3 in ?? ()
  111  LWP 3068 "rpc worker-3068" 0x00007f2243905ad3 in ?? ()
  112  LWP 3069 "rpc worker-3069" 0x00007f2243905ad3 in ?? ()
  113  LWP 3070 "rpc worker-3070" 0x00007f2243905ad3 in ?? ()
  114  LWP 3071 "rpc worker-3071" 0x00007f2243905ad3 in ?? ()
  115  LWP 3072 "rpc worker-3072" 0x00007f2243905ad3 in ?? ()
  116  LWP 3073 "rpc worker-3073" 0x00007f2243905ad3 in ?? ()
  117  LWP 3074 "diag-logger-307" 0x00007f2243905fb9 in ?? ()
  118  LWP 3075 "result-tracker-" 0x00007f2243905fb9 in ?? ()
  119  LWP 3076 "excess-log-dele" 0x00007f2243905fb9 in ?? ()
  120  LWP 3077 "acceptor-3077" 0x00007f223c44afc7 in ?? ()
  121  LWP 3078 "heartbeat-3078" 0x00007f2243905fb9 in ?? ()
  122  LWP 3079 "maintenance_sch" 0x00007f2243905fb9 in ?? ()

Thread 122 (LWP 3079):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21f8729700 in ?? ()
#2  0x0000000000000086 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007f21f8729750 in ?? ()
#6  0x000000000000010c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 3078):
#0  0x00007f2243905fb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000025 in ?? ()
#3  0x0000000100000081 in ?? ()
#4  0x000061300001d844 in ?? ()
#5  0x00007f21f8f41610 in ?? ()
#6  0x000000000000004b in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x00007f21f8f41630 in ?? ()
#9  0x0000000200000000 in ?? ()
#10 0x0000008000000189 in ?? ()
#11 0x00007f21f8f416b0 in ?? ()
#12 0x00000fe43f1e82d8 in ?? ()
#13 0x0000000000000000 in ?? ()

Thread 120 (LWP 3077):
#0  0x00007f223c44afc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007f21f9768d70 in ?? ()
#3  0x00007f21f9768da0 in ?? ()
#4  0x00007f21f9768ea0 in ?? ()
#5  0x00007f21f9768d90 in ?? ()
#6  0x00007f21f9768e00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007f21f9768f20 in ?? ()
#11 0x00007f21f976865c in ?? ()
#12 0x00007f21f97685d0 in ?? ()
#13 0x00007f21f8f6c000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 3076):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21f9f82f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 3075):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21fa79b120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007f21fa79b110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 3074):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 3073):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 3072):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 3071):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 3070):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 3069):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 3068):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 3067):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 3066):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 3065):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 3064):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 3063):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 3062):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 3061):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 3060):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 3059):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 3058):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 3057):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 3056):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 3055):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 3054):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 3053):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 3052):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 3051):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 3050):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 3049):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 3048):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 3047):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 3046):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 3045):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 3044):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 3043):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 3042):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 3041):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 3040):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 3039):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 3038):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 3037):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 3036):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 3035):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 3034):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 3033):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000a57 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0001a688c in ?? ()
#4  0x00007f220fb9beb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f220fb9bed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0001a6840 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f220fb9bed0 in ?? ()
#11 0x00007f220fb9be90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 75 (LWP 3032):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000776 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a0088 in ?? ()
#4  0x00007f22103b3eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f22103b3ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 74 (LWP 3031):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 3030):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 3029):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 3028):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 3027):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 3026):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 3025):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 3024):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 3023):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 3022):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 3021):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 3020):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 3019):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 3018):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 3017):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 3016):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 3015):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 3014):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 3013):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007f2219d7ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2219d7ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f2219d7ced0 in ?? ()
#11 0x00007f2219d7ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 3012):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 3011):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 3010):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 3009):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 3008):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 3007):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 3006):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 3005):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 3004):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 3003):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 3002):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 3001):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 3000):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 2999):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 2998):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 2997):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 2996):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 2995):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 2994):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 2993):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000003 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00009ffec in ?? ()
#4  0x00007f2223f5deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2223f5ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00009ffa0 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f2223f5ded0 in ?? ()
#11 0x00007f2223f5de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 35 (LWP 2992):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0000967f8 in ?? ()
#4  0x00007f2224775eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2224775ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 34 (LWP 2991):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 33 (LWP 2990):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 32 (LWP 2989):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 31 (LWP 2988):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 30 (LWP 2987):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 29 (LWP 2986):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 28 (LWP 2985):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 2984):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 2983):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 2982):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 2981):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 2980):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 2979):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 2978):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 2977):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 2976):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 2975):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 2974):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 2973):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f222e0b8ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007f222e0b8cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 2972):
#0  0x00007f2243905fb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007f222e8ba270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 2971):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f222f0ba260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007f222f0ba250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 2970):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 2969):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22300bd340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007f22300bd330 in ?? ()
#4  0x00007f22300bd540 in ?? ()
#5  0x00007f22300bd380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007f22300bd400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb95a6a95e4b000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff283 in ?? ()
#16 0x00000fe4c600fa80 in ?? ()
#17 0x00007f22300bd3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22300bd3b0 in ?? ()
#20 0x3fb95a6a95e4b000 in ?? ()
#21 0x00000000300bd400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb95a6a95e4b000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 2968):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22308be340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007f22308be330 in ?? ()
#4  0x00007f22308be540 in ?? ()
#5  0x00007f22308be380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007f22308be400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb9586cdc327000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000fe4c610fc80 in ?? ()
#17 0x00007f22308be3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22308be3b0 in ?? ()
#20 0x3fb9586cdc327000 in ?? ()
#21 0x00000000308be400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb9586cdc327000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 2967):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22310bf340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007f22310bf330 in ?? ()
#4  0x00007f22310bf540 in ?? ()
#5  0x00007f22310bf380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007f22310bf400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb96d3c83f25000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe4c620fe80 in ?? ()
#17 0x00007f22310bf3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22310bf3b0 in ?? ()
#20 0x3fb96d3c83f25000 in ?? ()
#21 0x00000000310bf400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3fb96d3c83f25000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 2966):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f2232ca3340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007f2232ca3330 in ?? ()
#4  0x00007f2232ca3540 in ?? ()
#5  0x00007f2232ca3380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007f2232ca3400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb94abf45471000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe4c658c680 in ?? ()
#17 0x00007f2232ca33e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 2963):
#0  0x00007f223c43cbb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007f2234cb77b8 in ?? ()
#3  0x000060200001f510 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 2962):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f22344b60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 2961):
#0  0x00007f22439099e2 in ?? ()
#1  0x00007f2233cb3bc0 in ?? ()
#2  0x00007f2233cb3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007f2233cb3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007f2233cb3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007f2233cb3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007f22492b2bfc in ?? ()
#12 0x00007f22492a2209 in ?? ()
#13 0x00007f22492a67f6 in ?? ()
#14 0x00007f22492ab230 in ?? ()
#15 0x00007f22492ab059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007f2240331529 in ?? ()
#18 0x00007f22438ff6db in ?? ()
#19 0x00000fe4c678e688 in ?? ()
#20 0x00007f2233cb3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 2955):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f2235cb8ca0 in ?? ()
#2  0x00000000000000a9 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007f2235cb8c90 in ?? ()
#6  0x0000000000000152 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 2954):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 2953):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 2952):
#0  0x00007f2243905fb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007f22374bc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 2951):
#0  0x00007f2243909d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:15.584566   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 2 with UUID f0a630514a8e403c89769e0367de2852 and pid 3082
************************ BEGIN STACKS **************************
[New LWP 3083]
[New LWP 3084]
[New LWP 3085]
[New LWP 3086]
[New LWP 3092]
[New LWP 3093]
[New LWP 3094]
[New LWP 3097]
[New LWP 3098]
[New LWP 3099]
[New LWP 3100]
[New LWP 3101]
[New LWP 3102]
[New LWP 3103]
[New LWP 3104]
[New LWP 3105]
[New LWP 3106]
[New LWP 3107]
[New LWP 3108]
[New LWP 3109]
[New LWP 3110]
[New LWP 3111]
[New LWP 3112]
[New LWP 3113]
[New LWP 3114]
[New LWP 3115]
[New LWP 3116]
[New LWP 3117]
[New LWP 3118]
[New LWP 3119]
[New LWP 3120]
[New LWP 3121]
[New LWP 3122]
[New LWP 3123]
[New LWP 3124]
[New LWP 3125]
[New LWP 3126]
[New LWP 3127]
[New LWP 3128]
[New LWP 3129]
[New LWP 3130]
[New LWP 3131]
[New LWP 3132]
[New LWP 3133]
[New LWP 3134]
[New LWP 3135]
[New LWP 3136]
[New LWP 3137]
[New LWP 3138]
[New LWP 3139]
[New LWP 3140]
[New LWP 3141]
[New LWP 3142]
[New LWP 3143]
[New LWP 3144]
[New LWP 3145]
[New LWP 3146]
[New LWP 3147]
[New LWP 3148]
[New LWP 3149]
[New LWP 3150]
[New LWP 3151]
[New LWP 3152]
[New LWP 3153]
[New LWP 3154]
[New LWP 3155]
[New LWP 3156]
[New LWP 3157]
[New LWP 3158]
[New LWP 3159]
[New LWP 3160]
[New LWP 3161]
[New LWP 3162]
[New LWP 3163]
[New LWP 3164]
[New LWP 3165]
[New LWP 3166]
[New LWP 3167]
[New LWP 3168]
[New LWP 3169]
[New LWP 3170]
[New LWP 3171]
[New LWP 3172]
[New LWP 3173]
[New LWP 3174]
[New LWP 3175]
[New LWP 3176]
[New LWP 3177]
[New LWP 3178]
[New LWP 3179]
[New LWP 3180]
[New LWP 3181]
[New LWP 3182]
[New LWP 3183]
[New LWP 3184]
[New LWP 3185]
[New LWP 3186]
[New LWP 3187]
[New LWP 3188]
[New LWP 3189]
[New LWP 3190]
[New LWP 3191]
[New LWP 3192]
[New LWP 3193]
[New LWP 3194]
[New LWP 3195]
[New LWP 3196]
[New LWP 3197]
[New LWP 3198]
[New LWP 3199]
[New LWP 3200]
[New LWP 3201]
[New LWP 3202]
[New LWP 3203]
[New LWP 3204]
[New LWP 3205]
[New LWP 3206]
[New LWP 3207]
[New LWP 3208]
[New LWP 3209]
[New LWP 3210]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007f53c4e90d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 3082 "kudu"   0x00007f53c4e90d50 in ?? ()
  2    LWP 3083 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  3    LWP 3084 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  4    LWP 3085 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  5    LWP 3086 "kernel-watcher-" 0x00007f53c4e8cfb9 in ?? ()
  6    LWP 3092 "ntp client-3092" 0x00007f53c4e909e2 in ?? ()
  7    LWP 3093 "file cache-evic" 0x00007f53c4e8cfb9 in ?? ()
  8    LWP 3094 "sq_acceptor" 0x00007f53bd9c3bb9 in ?? ()
  9    LWP 3097 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  10   LWP 3098 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  11   LWP 3099 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  12   LWP 3100 "rpc reactor-310" 0x00007f53bd9d0947 in ?? ()
  13   LWP 3101 "MaintenanceMgr " 0x00007f53c4e8cad3 in ?? ()
  14   LWP 3102 "txn-status-mana" 0x00007f53c4e8cfb9 in ?? ()
  15   LWP 3103 "collect_and_rem" 0x00007f53c4e8cfb9 in ?? ()
  16   LWP 3104 "tc-session-exp-" 0x00007f53c4e8cfb9 in ?? ()
  17   LWP 3105 "rpc worker-3105" 0x00007f53c4e8cad3 in ?? ()
  18   LWP 3106 "rpc worker-3106" 0x00007f53c4e8cad3 in ?? ()
  19   LWP 3107 "rpc worker-3107" 0x00007f53c4e8cad3 in ?? ()
  20   LWP 3108 "rpc worker-3108" 0x00007f53c4e8cad3 in ?? ()
  21   LWP 3109 "rpc worker-3109" 0x00007f53c4e8cad3 in ?? ()
  22   LWP 3110 "rpc worker-3110" 0x00007f53c4e8cad3 in ?? ()
  23   LWP 3111 "rpc worker-3111" 0x00007f53c4e8cad3 in ?? ()
  24   LWP 3112 "rpc worker-3112" 0x00007f53c4e8cad3 in ?? ()
  25   LWP 3113 "rpc worker-3113" 0x00007f53c4e8cad3 in ?? ()
  26   LWP 3114 "rpc worker-3114" 0x00007f53c4e8cad3 in ?? ()
  27   LWP 3115 "rpc worker-3115" 0x00007f53c4e8cad3 in ?? ()
  28   LWP 3116 "rpc worker-3116" 0x00007f53c4e8cad3 in ?? ()
  29   LWP 3117 "rpc worker-3117" 0x00007f53c4e8cad3 in ?? ()
  30   LWP 3118 "rpc worker-3118" 0x00007f53c4e8cad3 in ?? ()
  31   LWP 3119 "rpc worker-3119" 0x00007f53c4e8cad3 in ?? ()
  32   LWP 3120 "rpc worker-3120" 0x00007f53c4e8cad3 in ?? ()
  33   LWP 3121 "rpc worker-3121" 0x00007f53c4e8cad3 in ?? ()
  34   LWP 3122 "rpc worker-3122" 0x00007f53c4e8cad3 in ?? ()
  35   LWP 3123 "rpc worker-3123" 0x00007f53c4e8cad3 in ?? ()
  36   LWP 3124 "rpc worker-3124" 0x00007f53c4e8cad3 in ?? ()
  37   LWP 3125 "rpc worker-3125" 0x00007f53c4e8cad3 in ?? ()
  38   LWP 3126 "rpc worker-3126" 0x00007f53c4e8cad3 in ?? ()
  39   LWP 3127 "rpc worker-3127" 0x00007f53c4e8cad3 in ?? ()
  40   LWP 3128 "rpc worker-3128" 0x00007f53c4e8cad3 in ?? ()
  41   LWP 3129 "rpc worker-3129" 0x00007f53c4e8cad3 in ?? ()
  42   LWP 3130 "rpc worker-3130" 0x00007f53c4e8cad3 in ?? ()
  43   LWP 3131 "rpc worker-3131" 0x00007f53c4e8cad3 in ?? ()
  44   LWP 3132 "rpc worker-3132" 0x00007f53c4e8cad3 in ?? ()
  45   LWP 3133 "rpc worker-3133" 0x00007f53c4e8cad3 in ?? ()
  46   LWP 3134 "rpc worker-3134" 0x00007f53c4e8cad3 in ?? ()
  47   LWP 3135 "rpc worker-3135" 0x00007f53c4e8cad3 in ?? ()
  48   LWP 3136 "rpc worker-3136" 0x00007f53c4e8cad3 in ?? ()
  49   LWP 3137 "rpc worker-3137" 0x00007f53c4e8cad3 in ?? ()
  50   LWP 3138 "rpc worker-3138" 0x00007f53c4e8cad3 in ?? ()
  51   LWP 3139 "rpc worker-3139" 0x00007f53c4e8cad3 in ?? ()
  52   LWP 3140 "rpc worker-3140" 0x00007f53c4e8cad3 in ?? ()
  53   LWP 3141 "rpc worker-3141" 0x00007f53c4e8cad3 in ?? ()
  54   LWP 3142 "rpc worker-3142" 0x00007f53c4e8cad3 in ?? ()
  55   LWP 3143 "rpc worker-3143" 0x00007f53c4e8cad3 in ?? ()
  56   LWP 3144 "rpc worker-3144" 0x00007f53c4e8cad3 in ?? ()
  57   LWP 3145 "rpc worker-3145" 0x00007f53c4e8cad3 in ?? ()
  58   LWP 3146 "rpc worker-3146" 0x00007f53c4e8cad3 in ?? ()
  59   LWP 3147 "rpc worker-3147" 0x00007f53c4e8cad3 in ?? ()
  60   LWP 3148 "rpc worker-3148" 0x00007f53c4e8cad3 in ?? ()
  61   LWP 3149 "rpc worker-3149" 0x00007f53c4e8cad3 in ?? ()
  62   LWP 3150 "rpc worker-3150" 0x00007f53c4e8cad3 in ?? ()
  63   LWP 3151 "rpc worker-3151" 0x00007f53c4e8cad3 in ?? ()
  64   LWP 3152 "rpc worker-3152" 0x00007f53c4e8cad3 in ?? ()
  65   LWP 3153 "rpc worker-3153" 0x00007f53c4e8cad3 in ?? ()
  66   LWP 3154 "rpc worker-3154" 0x00007f53c4e8cad3 in ?? ()
  67   LWP 3155 "rpc worker-3155" 0x00007f53c4e8cad3 in ?? ()
  68   LWP 3156 "rpc worker-3156" 0x00007f53c4e8cad3 in ?? ()
  69   LWP 3157 "rpc worker-3157" 0x00007f53c4e8cad3 in ?? ()
  70   LWP 3158 "rpc worker-3158" 0x00007f53c4e8cad3 in ?? ()
  71   LWP 3159 "rpc worker-3159" 0x00007f53c4e8cad3 in ?? ()
  72   LWP 3160 "rpc worker-3160" 0x00007f53c4e8cad3 in ?? ()
  73   LWP 3161 "rpc worker-3161" 0x00007f53c4e8cad3 in ?? ()
  74   LWP 3162 "rpc worker-3162" 0x00007f53c4e8cad3 in ?? ()
  75   LWP 3163 "rpc worker-3163" 0x00007f53c4e8cad3 in ?? ()
  76   LWP 3164 "rpc worker-3164" 0x00007f53c4e8cad3 in ?? ()
  77   LWP 3165 "rpc worker-3165" 0x00007f53c4e8cad3 in ?? ()
  78   LWP 3166 "rpc worker-3166" 0x00007f53c4e8cad3 in ?? ()
  79   LWP 3167 "rpc worker-3167" 0x00007f53c4e8cad3 in ?? ()
  80   LWP 3168 "rpc worker-3168" 0x00007f53c4e8cad3 in ?? ()
  81   LWP 3169 "rpc worker-3169" 0x00007f53c4e8cad3 in ?? ()
  82   LWP 3170 "rpc worker-3170" 0x00007f53c4e8cad3 in ?? ()
  83   LWP 3171 "rpc worker-3171" 0x00007f53c4e8cad3 in ?? ()
  84   LWP 3172 "rpc worker-3172" 0x00007f53c4e8cad3 in ?? ()
  85   LWP 3173 "rpc worker-3173" 0x00007f53c4e8cad3 in ?? ()
  86   LWP 3174 "rpc worker-3174" 0x00007f53c4e8cad3 in ?? ()
  87   LWP 3175 "rpc worker-3175" 0x00007f53c4e8cad3 in ?? ()
  88   LWP 3176 "rpc worker-3176" 0x00007f53c4e8cad3 in ?? ()
  89   LWP 3177 "rpc worker-3177" 0x00007f53c4e8cad3 in ?? ()
  90   LWP 3178 "rpc worker-3178" 0x00007f53c4e8cad3 in ?? ()
  91   LWP 3179 "rpc worker-3179" 0x00007f53c4e8cad3 in ?? ()
  92   LWP 3180 "rpc worker-3180" 0x00007f53c4e8cad3 in ?? ()
  93   LWP 3181 "rpc worker-3181" 0x00007f53c4e8cad3 in ?? ()
  94   LWP 3182 "rpc worker-3182" 0x00007f53c4e8cad3 in ?? ()
  95   LWP 3183 "rpc worker-3183" 0x00007f53c4e8cad3 in ?? ()
  96   LWP 3184 "rpc worker-3184" 0x00007f53c4e8cad3 in ?? ()
  97   LWP 3185 "rpc worker-3185" 0x00007f53c4e8cad3 in ?? ()
  98   LWP 3186 "rpc worker-3186" 0x00007f53c4e8cad3 in ?? ()
  99   LWP 3187 "rpc worker-3187" 0x00007f53c4e8cad3 in ?? ()
  100  LWP 3188 "rpc worker-3188" 0x00007f53c4e8cad3 in ?? ()
  101  LWP 3189 "rpc worker-3189" 0x00007f53c4e8cad3 in ?? ()
  102  LWP 3190 "rpc worker-3190" 0x00007f53c4e8cad3 in ?? ()
  103  LWP 3191 "rpc worker-3191" 0x00007f53c4e8cad3 in ?? ()
  104  LWP 3192 "rpc worker-3192" 0x00007f53c4e8cad3 in ?? ()
  105  LWP 3193 "rpc worker-3193" 0x00007f53c4e8cad3 in ?? ()
  106  LWP 3194 "rpc worker-3194" 0x00007f53c4e8cad3 in ?? ()
  107  LWP 3195 "rpc worker-3195" 0x00007f53c4e8cad3 in ?? ()
  108  LWP 3196 "rpc worker-3196" 0x00007f53c4e8cad3 in ?? ()
  109  LWP 3197 "rpc worker-3197" 0x00007f53c4e8cad3 in ?? ()
  110  LWP 3198 "rpc worker-3198" 0x00007f53c4e8cad3 in ?? ()
  111  LWP 3199 "rpc worker-3199" 0x00007f53c4e8cad3 in ?? ()
  112  LWP 3200 "rpc worker-3200" 0x00007f53c4e8cad3 in ?? ()
  113  LWP 3201 "rpc worker-3201" 0x00007f53c4e8cad3 in ?? ()
  114  LWP 3202 "rpc worker-3202" 0x00007f53c4e8cad3 in ?? ()
  115  LWP 3203 "rpc worker-3203" 0x00007f53c4e8cad3 in ?? ()
  116  LWP 3204 "rpc worker-3204" 0x00007f53c4e8cad3 in ?? ()
  117  LWP 3205 "diag-logger-320" 0x00007f53c4e8cfb9 in ?? ()
  118  LWP 3206 "result-tracker-" 0x00007f53c4e8cfb9 in ?? ()
  119  LWP 3207 "excess-log-dele" 0x00007f53c4e8cfb9 in ?? ()
  120  LWP 3208 "acceptor-3208" 0x00007f53bd9d1fc7 in ?? ()
  121  LWP 3209 "heartbeat-3209" 0x00007f53c4e8cfb9 in ?? ()
  122  LWP 3210 "maintenance_sch" 0x00007f53c4e8cfb9 in ?? ()

Thread 122 (LWP 3210):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f5379cb0700 in ?? ()
#2  0x0000000000000087 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007f5379cb0750 in ?? ()
#6  0x000000000000010e in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 3209):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000025 in ?? ()
#3  0x0000000100000081 in ?? ()
#4  0x000061300001d844 in ?? ()
#5  0x00007f537a4c8610 in ?? ()
#6  0x000000000000004b in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x00007f537a4c8630 in ?? ()
#9  0x0000000200000000 in ?? ()
#10 0x0000008000000189 in ?? ()
#11 0x00007f537a4c86b0 in ?? ()
#12 0x00000fea6f4990d8 in ?? ()
#13 0x0000000000000000 in ?? ()

Thread 120 (LWP 3208):
#0  0x00007f53bd9d1fc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007f537acefd70 in ?? ()
#3  0x00007f537acefda0 in ?? ()
#4  0x00007f537acefea0 in ?? ()
#5  0x00007f537acefd90 in ?? ()
#6  0x00007f537acefe00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007f537aceff20 in ?? ()
#11 0x00007f537acef65c in ?? ()
#12 0x000000327acef5d0 in ?? ()
#13 0x00007f537a4f3000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 3207):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f537b509f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 3206):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f537bd22120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007f537bd22110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 3205):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 3204):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 3203):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 3202):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 3201):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 3200):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 3199):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 3198):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 3197):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 3196):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 3195):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 3194):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 3193):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 3192):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 3191):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 3190):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 3189):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 3188):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 3187):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 3186):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 3185):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 3184):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 3183):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 3182):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 3181):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 3180):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 3179):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 3178):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 3177):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 3176):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 3175):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 3174):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 3173):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 3172):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 3171):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 3170):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 3169):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 3168):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 3167):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 3166):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 3165):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 3164):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000941 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0001a688c in ?? ()
#4  0x00007f5391122eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f5391122ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0001a6840 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f5391122ed0 in ?? ()
#11 0x00007f5391122e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 75 (LWP 3163):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000856 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a0088 in ?? ()
#4  0x00007f539193aeb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f539193aed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 74 (LWP 3162):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 3161):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 3160):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 3159):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 3158):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 3157):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 3156):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 3155):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 3154):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 3153):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 3152):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 3151):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 3150):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 3149):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 3148):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 3147):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 3146):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 3145):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 3144):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007f539b303eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f539b303ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f539b303ed0 in ?? ()
#11 0x00007f539b303e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 3143):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 3142):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 3141):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 3140):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 3139):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 3138):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 3137):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 3136):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 3135):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 3134):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 3133):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 3132):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 3131):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 3130):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 3129):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 3128):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 3127):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 3126):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 3125):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 3124):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00009ffe8 in ?? ()
#4  0x00007f53a54e4eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a54e4ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 35 (LWP 3123):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0000967fc in ?? ()
#4  0x00007f53a5cfceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a5cfced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000967b0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a5cfced0 in ?? ()
#11 0x00007f53a5cfce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 34 (LWP 3122):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008fffc in ?? ()
#4  0x00007f53a6514eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a6514ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00008ffb0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a6514ed0 in ?? ()
#11 0x00007f53a6514e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 33 (LWP 3121):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008d00c in ?? ()
#4  0x00007f53a6d2ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a6d2ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00008cfc0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a6d2ced0 in ?? ()
#11 0x00007f53a6d2ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 32 (LWP 3120):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008680c in ?? ()
#4  0x00007f53a7544eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a7544ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000867c0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a7544ed0 in ?? ()
#11 0x00007f53a7544e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 31 (LWP 3119):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008000c in ?? ()
#4  0x00007f53a7d5ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a7d5ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00007ffc0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a7d5ced0 in ?? ()
#11 0x00007f53a7d5ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 30 (LWP 3118):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007681c in ?? ()
#4  0x00007f53a8574eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a8574ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000767d0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a8574ed0 in ?? ()
#11 0x00007f53a8574e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 29 (LWP 3117):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007001c in ?? ()
#4  0x00007f53a8d8ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a8d8ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00006ffd0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a8d8ced0 in ?? ()
#11 0x00007f53a8d8ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 28 (LWP 3116):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 3115):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 3114):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 3113):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 3112):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 3111):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 3110):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 3109):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 3108):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 3107):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 3106):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 3105):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 3104):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53af6e9ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007f53af6e9cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 3103):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007f53aff11270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 3102):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b0727260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007f53b0727250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 3101):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 3100):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b1776340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007f53b1776330 in ?? ()
#4  0x00007f53b1776540 in ?? ()
#5  0x00007f53b1776380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007f53b1776400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb97e0c34f53000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000feaf62e6c80 in ?? ()
#17 0x00007f53b17763e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b17763b0 in ?? ()
#20 0x3fb97e0c34f53000 in ?? ()
#21 0x00000000b1776400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb97e0c34f53000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 3099):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b1f8d340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007f53b1f8d330 in ?? ()
#4  0x00007f53b1f8d540 in ?? ()
#5  0x00007f53b1f8d380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007f53b1f8d400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb9760dbd245000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000feaf63e9a80 in ?? ()
#17 0x00007f53b1f8d3e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b1f8d3b0 in ?? ()
#20 0x3fb9760dbd245000 in ?? ()
#21 0x00000000b1f8d400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb9760dbd245000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 3098):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b27a4340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007f53b27a4330 in ?? ()
#4  0x00007f53b27a4540 in ?? ()
#5  0x00007f53b27a4380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007f53b27a4400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb957c3719e1000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000feaf64ec880 in ?? ()
#17 0x00007f53b27a43e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b27a43b0 in ?? ()
#20 0x3fb957c3719e1000 in ?? ()
#21 0x00000000b27a4400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3fb957c3719e1000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 3097):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b47ad340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007f53b47ad330 in ?? ()
#4  0x00007f53b47ad540 in ?? ()
#5  0x00007f53b47ad380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007f53b47ad400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb976a135e78000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff287 in ?? ()
#16 0x00000feaf68eda80 in ?? ()
#17 0x00007f53b47ad3e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 3094):
#0  0x00007f53bd9c3bb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007f53b61b77b8 in ?? ()
#3  0x000060200001c790 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 3093):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b59b60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 3092):
#0  0x00007f53c4e909e2 in ?? ()
#1  0x00007f53b51b3bc0 in ?? ()
#2  0x00007f53b51b3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007f53b51b3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007f53b51b3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007f53b51b3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007f53ca839bfc in ?? ()
#12 0x00007f53ca829209 in ?? ()
#13 0x00007f53ca82d7f6 in ?? ()
#14 0x00007f53ca832230 in ?? ()
#15 0x00007f53ca832059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007f53c18b8529 in ?? ()
#18 0x00007f53c4e866db in ?? ()
#19 0x00000feaf6a2e688 in ?? ()
#20 0x00007f53b51b3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 3086):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b71b8ca0 in ?? ()
#2  0x00000000000000aa in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007f53b71b8c90 in ?? ()
#6  0x0000000000000154 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 3085):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 3084):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 3083):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007f53b89bc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 3082):
#0  0x00007f53c4e90d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:16.570231   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2820
I20260430 07:55:16.572376  2942 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:16.887586   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2820
W20260430 07:55:16.949855  3100 connection.cc:570] server connection from 127.0.105.1:35387 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20260430 07:55:16.950189  2968 connection.cc:570] server connection from 127.0.105.1:38369 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:16.950459   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2951
W20260430 07:55:16.950716  2747 connection.cc:570] server connection from 127.0.105.1:58277 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:16.952404  3073 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.205557   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2951
I20260430 07:55:17.262499   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3082
I20260430 07:55:17.264292  3204 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.537568   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3082
I20260430 07:55:17.597476   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2730
I20260430 07:55:17.598734  2791 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.715852   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2730
2026-04-30T07:55:17Z chronyd exiting
I20260430 07:55:17.755309   420 test_util.cc:182] -----------------------------------------------
I20260430 07:55:17.755452   420 test_util.cc:183] Had failures, leaving test files at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0

Full log

Note: This is test shard 1 of 6.
[==========] Running 5 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 4 tests from TabletCopyITest
[ RUN      ] TabletCopyITest.TestRejectRogueLeader
2026-04-30T07:53:50Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2026-04-30T07:53:50Z Disabled control of system clock
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20260430 07:53:50.068619   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:35837
--webserver_interface=127.0.105.62
--webserver_port=0
--builtin_ntp_servers=127.0.105.20:42495
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:35837
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20260430 07:53:50.520516   428 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:53:50.520988   428 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:53:50.521136   428 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:53:50.532712   428 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:53:50.532899   428 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:53:50.533002   428 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:53:50.533088   428 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:53:50.545133   428 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:42495
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:35837
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:35837
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:53:50.547113   428 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:53:50.549150   428 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:53:50.561266   434 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:50.562242   436 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:50.562587   433 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:53:50.565841   428 server_base.cc:1061] running on GCE node
I20260430 07:53:50.567444   428 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:53:50.569404   428 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:53:50.570673   428 hybrid_clock.cc:648] HybridClock initialized: now 1777535630570530 us; error 150 us; skew 500 ppm
I20260430 07:53:50.571671   428 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:53:50.575151   428 webserver.cc:492] Webserver started at http://127.0.105.62:46279/ using document root <none> and password file <none>
I20260430 07:53:50.576018   428 fs_manager.cc:362] Metadata directory not provided
I20260430 07:53:50.576124   428 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:53:50.576464   428 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:53:50.580219   428 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data/instance:
uuid: "97d4678d0a53426680d96f6e444ec320"
format_stamp: "Formatted at 2026-04-30 07:53:50 on dist-test-slave-1g5s"
I20260430 07:53:50.581475   428 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal/instance:
uuid: "97d4678d0a53426680d96f6e444ec320"
format_stamp: "Formatted at 2026-04-30 07:53:50 on dist-test-slave-1g5s"
I20260430 07:53:50.589285   428 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.007s	sys 0.001s
I20260430 07:53:50.593837   442 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:50.596319   428 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.003s	sys 0.002s
I20260430 07:53:50.596611   428 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "97d4678d0a53426680d96f6e444ec320"
format_stamp: "Formatted at 2026-04-30 07:53:50 on dist-test-slave-1g5s"
I20260430 07:53:50.596892   428 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:53:50.633666   428 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:53:50.634706   428 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:53:50.635056   428 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:53:50.668941   428 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:35837
I20260430 07:53:50.668941   493 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:35837 every 8 connection(s)
I20260430 07:53:50.671641   428 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:53:50.678157   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 428
I20260430 07:53:50.678949   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/master-0/wal/instance
I20260430 07:53:50.679677   494 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:53:50.696017   494 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320: Bootstrap starting.
I20260430 07:53:50.702183   494 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320: Neither blocks nor log segments found. Creating new log.
I20260430 07:53:50.704267   494 log.cc:826] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320: Log is configured to *not* fsync() on all Append() calls
I20260430 07:53:50.709789   494 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320: No bootstrap required, opened a new log
I20260430 07:53:50.719022   494 raft_consensus.cc:359] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } }
I20260430 07:53:50.719683   494 raft_consensus.cc:385] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:53:50.719861   494 raft_consensus.cc:740] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 97d4678d0a53426680d96f6e444ec320, State: Initialized, Role: FOLLOWER
I20260430 07:53:50.720785   494 consensus_queue.cc:260] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } }
I20260430 07:53:50.721122   494 raft_consensus.cc:399] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:53:50.721319   494 raft_consensus.cc:493] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:53:50.721539   494 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:53:50.724455   494 raft_consensus.cc:515] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } }
I20260430 07:53:50.725845   494 leader_election.cc:304] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 97d4678d0a53426680d96f6e444ec320; no voters: 
I20260430 07:53:50.727481   494 leader_election.cc:290] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:53:50.728438   499 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:53:50.730283   499 raft_consensus.cc:697] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [term 1 LEADER]: Becoming Leader. State: Replica: 97d4678d0a53426680d96f6e444ec320, State: Running, Role: LEADER
I20260430 07:53:50.731312   499 consensus_queue.cc:237] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } }
I20260430 07:53:50.732827   494 sys_catalog.cc:565] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:53:50.735702   501 sys_catalog.cc:455] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 97d4678d0a53426680d96f6e444ec320. Latest consensus state: current_term: 1 leader_uuid: "97d4678d0a53426680d96f6e444ec320" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } } }
I20260430 07:53:50.736312   501 sys_catalog.cc:458] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [sys.catalog]: This master's current role is: LEADER
I20260430 07:53:50.736603   500 sys_catalog.cc:455] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "97d4678d0a53426680d96f6e444ec320" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "97d4678d0a53426680d96f6e444ec320" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 35837 } } }
I20260430 07:53:50.736963   500 sys_catalog.cc:458] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320 [sys.catalog]: This master's current role is: LEADER
I20260430 07:53:50.739157   504 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:53:50.748059   504 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:53:50.762087   504 catalog_manager.cc:1357] Generated new cluster ID: c336880c13a348ad8521cc1a34f531eb
I20260430 07:53:50.762298   504 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:53:50.784696   504 catalog_manager.cc:1380] Generated new certificate authority record
I20260430 07:53:50.786314   504 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:53:50.807719   504 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 97d4678d0a53426680d96f6e444ec320: Generated new TSK 0
I20260430 07:53:50.808813   504 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:53:50.825078   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:0
--local_ip_for_outbound_sockets=127.0.105.1
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--builtin_ntp_servers=127.0.105.20:42495
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20260430 07:53:51.449301   518 flags.cc:432] Enabled unsafe flag: --enable_leader_failure_detection=false
W20260430 07:53:51.449679   518 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:53:51.449739   518 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:53:51.449831   518 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:53:51.461740   518 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:53:51.461954   518 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:53:51.474365   518 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:42495
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:53:51.477460   518 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:53:51.479668   518 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:53:51.492200   524 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:51.492318   523 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:51.494135   526 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:53:51.495004   518 server_base.cc:1061] running on GCE node
I20260430 07:53:51.495957   518 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:53:51.497128   518 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:53:51.498490   518 hybrid_clock.cc:648] HybridClock initialized: now 1777535631498377 us; error 71 us; skew 500 ppm
I20260430 07:53:51.498888   518 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:53:51.501857   518 webserver.cc:492] Webserver started at http://127.0.105.1:35577/ using document root <none> and password file <none>
I20260430 07:53:51.502740   518 fs_manager.cc:362] Metadata directory not provided
I20260430 07:53:51.502921   518 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:53:51.503301   518 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:53:51.506467   518 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data/instance:
uuid: "6e78d688969848c88f7579ee4d45ac3b"
format_stamp: "Formatted at 2026-04-30 07:53:51 on dist-test-slave-1g5s"
I20260430 07:53:51.507395   518 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal/instance:
uuid: "6e78d688969848c88f7579ee4d45ac3b"
format_stamp: "Formatted at 2026-04-30 07:53:51 on dist-test-slave-1g5s"
I20260430 07:53:51.514513   518 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.006s	sys 0.001s
I20260430 07:53:51.519984   532 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:51.522128   518 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.000s
I20260430 07:53:51.522377   518 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "6e78d688969848c88f7579ee4d45ac3b"
format_stamp: "Formatted at 2026-04-30 07:53:51 on dist-test-slave-1g5s"
I20260430 07:53:51.522622   518 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:53:51.567464   518 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:53:51.569231   518 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:53:51.569690   518 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:53:51.570917   518 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:53:51.572993   518 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:53:51.573081   518 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:51.573238   518 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:53:51.573323   518 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.001s
I20260430 07:53:51.627996   518 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:41237
I20260430 07:53:51.628046   644 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:41237 every 8 connection(s)
I20260430 07:53:51.633508   518 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:53:51.637061   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 518
I20260430 07:53:51.637343   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal/instance
I20260430 07:53:51.644397   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:0
--local_ip_for_outbound_sockets=127.0.105.2
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--builtin_ntp_servers=127.0.105.20:42495
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20260430 07:53:51.661597   645 heartbeater.cc:344] Connected to a master server at 127.0.105.62:35837
I20260430 07:53:51.662322   645 heartbeater.cc:461] Registering TS with master...
I20260430 07:53:51.663547   645 heartbeater.cc:507] Master 127.0.105.62:35837 requested a full tablet report, sending...
I20260430 07:53:51.669286   459 ts_manager.cc:194] Registered new tserver with Master: 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237)
I20260430 07:53:51.672012   459 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:47439
W20260430 07:53:52.078603   649 flags.cc:432] Enabled unsafe flag: --enable_leader_failure_detection=false
W20260430 07:53:52.079218   649 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:53:52.079349   649 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:53:52.079535   649 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:53:52.092377   649 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:53:52.092669   649 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:53:52.106494   649 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:42495
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:53:52.108264   649 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:53:52.110291   649 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:53:52.122471   655 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:52.122464   654 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:52.124594   657 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:53:52.125624   649 server_base.cc:1061] running on GCE node
I20260430 07:53:52.126710   649 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:53:52.128113   649 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:53:52.129474   649 hybrid_clock.cc:648] HybridClock initialized: now 1777535632129421 us; error 158 us; skew 500 ppm
I20260430 07:53:52.129851   649 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:53:52.133073   649 webserver.cc:492] Webserver started at http://127.0.105.2:40213/ using document root <none> and password file <none>
I20260430 07:53:52.133989   649 fs_manager.cc:362] Metadata directory not provided
I20260430 07:53:52.134166   649 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:53:52.134655   649 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:53:52.137879   649 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data/instance:
uuid: "01ce1dae9db0472eb9a08ac196bde0cc"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.139335   649 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal/instance:
uuid: "01ce1dae9db0472eb9a08ac196bde0cc"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.148452   649 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.009s	sys 0.000s
I20260430 07:53:52.154021   663 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.156232   649 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.002s
I20260430 07:53:52.156599   649 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "01ce1dae9db0472eb9a08ac196bde0cc"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.156850   649 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:53:52.197077   649 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:53:52.198158   649 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:53:52.198539   649 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:53:52.199815   649 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:53:52.202248   649 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:53:52.202378   649 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.202497   649 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:53:52.202574   649 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.253012   649 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:36401
I20260430 07:53:52.253083   775 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:36401 every 8 connection(s)
I20260430 07:53:52.255033   649 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:53:52.263850   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 649
I20260430 07:53:52.264051   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-1/wal/instance
I20260430 07:53:52.268242   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.3:0
--local_ip_for_outbound_sockets=127.0.105.3
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--builtin_ntp_servers=127.0.105.20:42495
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20260430 07:53:52.273263   776 heartbeater.cc:344] Connected to a master server at 127.0.105.62:35837
I20260430 07:53:52.273816   776 heartbeater.cc:461] Registering TS with master...
I20260430 07:53:52.275112   776 heartbeater.cc:507] Master 127.0.105.62:35837 requested a full tablet report, sending...
I20260430 07:53:52.277254   459 ts_manager.cc:194] Registered new tserver with Master: 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401)
I20260430 07:53:52.278512   459 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:50035
W20260430 07:53:52.666386   780 flags.cc:432] Enabled unsafe flag: --enable_leader_failure_detection=false
W20260430 07:53:52.666822   780 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:53:52.666919   780 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:53:52.667078   780 flags.cc:432] Enabled unsafe flag: --never_fsync=true
I20260430 07:53:52.676862   645 heartbeater.cc:499] Master 127.0.105.62:35837 was elected leader, sending a full tablet report...
W20260430 07:53:52.681241   780 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:53:52.681512   780 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.3
I20260430 07:53:52.693567   780 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:42495
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:35837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.3
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:53:52.695338   780 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:53:52.697326   780 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:53:52.710654   788 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:52.710795   785 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:53:52.710673   786 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:53:52.711282   780 server_base.cc:1061] running on GCE node
I20260430 07:53:52.712179   780 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:53:52.713536   780 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:53:52.715062   780 hybrid_clock.cc:648] HybridClock initialized: now 1777535632714957 us; error 88 us; skew 500 ppm
I20260430 07:53:52.715533   780 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:53:52.718441   780 webserver.cc:492] Webserver started at http://127.0.105.3:37163/ using document root <none> and password file <none>
I20260430 07:53:52.719359   780 fs_manager.cc:362] Metadata directory not provided
I20260430 07:53:52.719547   780 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:53:52.719986   780 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:53:52.723266   780 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data/instance:
uuid: "8d39b7f876294f979df191c0eb53602e"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.724210   780 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal/instance:
uuid: "8d39b7f876294f979df191c0eb53602e"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.731926   780 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.004s	sys 0.004s
I20260430 07:53:52.736578   794 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.738673   780 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.001s	sys 0.001s
I20260430 07:53:52.738919   780 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal
uuid: "8d39b7f876294f979df191c0eb53602e"
format_stamp: "Formatted at 2026-04-30 07:53:52 on dist-test-slave-1g5s"
I20260430 07:53:52.739183   780 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:53:52.758162   780 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:53:52.759147   780 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:53:52.759469   780 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:53:52.760569   780 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:53:52.762957   780 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:53:52.763073   780 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.763223   780 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:53:52.763326   780 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:53:52.808192   780 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.3:43025
I20260430 07:53:52.808233   906 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.3:43025 every 8 connection(s)
I20260430 07:53:52.810096   780 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
I20260430 07:53:52.814101   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 780
I20260430 07:53:52.814342   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-2/wal/instance
I20260430 07:53:52.827931   907 heartbeater.cc:344] Connected to a master server at 127.0.105.62:35837
I20260430 07:53:52.828317   907 heartbeater.cc:461] Registering TS with master...
I20260430 07:53:52.829358   907 heartbeater.cc:507] Master 127.0.105.62:35837 requested a full tablet report, sending...
I20260430 07:53:52.831329   459 ts_manager.cc:194] Registered new tserver with Master: 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025)
I20260430 07:53:52.832360   459 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.3:60725
I20260430 07:53:52.833069   420 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20260430 07:53:52.886349   459 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51352:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20260430 07:53:52.888098   459 catalog_manager.cc:7033] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20260430 07:53:52.926219   842 tablet_service.cc:1511] Processing CreateTablet for tablet 5c76e8da6f20412c8d43c20ebd6bf579 (DEFAULT_TABLE table=test-workload [id=e922619a3b364a14a5c2ed5eec6d0fd9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:53:52.926587   711 tablet_service.cc:1511] Processing CreateTablet for tablet 5c76e8da6f20412c8d43c20ebd6bf579 (DEFAULT_TABLE table=test-workload [id=e922619a3b364a14a5c2ed5eec6d0fd9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:53:52.928440   842 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5c76e8da6f20412c8d43c20ebd6bf579. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:53:52.928423   711 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5c76e8da6f20412c8d43c20ebd6bf579. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:53:52.929823   580 tablet_service.cc:1511] Processing CreateTablet for tablet 5c76e8da6f20412c8d43c20ebd6bf579 (DEFAULT_TABLE table=test-workload [id=e922619a3b364a14a5c2ed5eec6d0fd9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:53:52.931878   580 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5c76e8da6f20412c8d43c20ebd6bf579. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:53:52.943889   931 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Bootstrap starting.
I20260430 07:53:52.945770   932 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Bootstrap starting.
I20260430 07:53:52.948834   933 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Bootstrap starting.
I20260430 07:53:52.952147   931 tablet_bootstrap.cc:654] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Neither blocks nor log segments found. Creating new log.
I20260430 07:53:52.952749   932 tablet_bootstrap.cc:654] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Neither blocks nor log segments found. Creating new log.
I20260430 07:53:52.953701   933 tablet_bootstrap.cc:654] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Neither blocks nor log segments found. Creating new log.
I20260430 07:53:52.953910   931 log.cc:826] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Log is configured to *not* fsync() on all Append() calls
I20260430 07:53:52.954135   932 log.cc:826] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Log is configured to *not* fsync() on all Append() calls
I20260430 07:53:52.955741   933 log.cc:826] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Log is configured to *not* fsync() on all Append() calls
I20260430 07:53:52.957258   931 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: No bootstrap required, opened a new log
I20260430 07:53:52.957656   931 ts_tablet_manager.cc:1403] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Time spent bootstrapping tablet: real 0.014s	user 0.005s	sys 0.006s
I20260430 07:53:52.958703   933 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: No bootstrap required, opened a new log
I20260430 07:53:52.959021   933 ts_tablet_manager.cc:1403] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Time spent bootstrapping tablet: real 0.011s	user 0.004s	sys 0.005s
I20260430 07:53:52.959108   932 tablet_bootstrap.cc:492] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: No bootstrap required, opened a new log
I20260430 07:53:52.959604   932 ts_tablet_manager.cc:1403] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Time spent bootstrapping tablet: real 0.014s	user 0.010s	sys 0.000s
I20260430 07:53:52.965880   933 raft_consensus.cc:359] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.966347   933 raft_consensus.cc:740] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6e78d688969848c88f7579ee4d45ac3b, State: Initialized, Role: FOLLOWER
I20260430 07:53:52.966849   933 consensus_queue.cc:260] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.968617   933 ts_tablet_manager.cc:1434] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Time spent starting tablet: real 0.009s	user 0.010s	sys 0.000s
I20260430 07:53:52.968343   932 raft_consensus.cc:359] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.968832   932 raft_consensus.cc:740] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8d39b7f876294f979df191c0eb53602e, State: Initialized, Role: FOLLOWER
I20260430 07:53:52.968866   931 raft_consensus.cc:359] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.969485   931 raft_consensus.cc:740] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 01ce1dae9db0472eb9a08ac196bde0cc, State: Initialized, Role: FOLLOWER
I20260430 07:53:52.969514   932 consensus_queue.cc:260] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.970111   931 consensus_queue.cc:260] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:52.970901   907 heartbeater.cc:499] Master 127.0.105.62:35837 was elected leader, sending a full tablet report...
I20260430 07:53:52.971899   932 ts_tablet_manager.cc:1434] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Time spent starting tablet: real 0.012s	user 0.011s	sys 0.000s
I20260430 07:53:52.974265   776 heartbeater.cc:499] Master 127.0.105.62:35837 was elected leader, sending a full tablet report...
I20260430 07:53:52.974782   931 ts_tablet_manager.cc:1434] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Time spent starting tablet: real 0.017s	user 0.014s	sys 0.000s
W20260430 07:53:53.007678   777 tablet.cc:2404] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:53:53.020530   420 tablet_copy-itest.cc:200] loading data...
I20260430 07:53:53.022186   731 tablet_service.cc:2044] Received Run Leader Election RPC: tablet_id: "5c76e8da6f20412c8d43c20ebd6bf579"
dest_uuid: "01ce1dae9db0472eb9a08ac196bde0cc"
 from {username='slave'} at 127.0.0.1:59886
I20260430 07:53:53.022585   731 raft_consensus.cc:493] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20260430 07:53:53.022783   731 raft_consensus.cc:3060] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:53:53.025101   731 raft_consensus.cc:515] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:53.027027   731 leader_election.cc:290] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [CANDIDATE]: Term 1 election: Requested vote from peers 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025), 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237)
I20260430 07:53:53.039129   862 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "5c76e8da6f20412c8d43c20ebd6bf579" candidate_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "8d39b7f876294f979df191c0eb53602e"
I20260430 07:53:53.039880   862 raft_consensus.cc:3060] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:53:53.043504   600 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "5c76e8da6f20412c8d43c20ebd6bf579" candidate_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "6e78d688969848c88f7579ee4d45ac3b"
I20260430 07:53:53.044013   600 raft_consensus.cc:3060] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:53:53.044035   862 raft_consensus.cc:2468] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 01ce1dae9db0472eb9a08ac196bde0cc in term 1.
I20260430 07:53:53.049059   600 raft_consensus.cc:2468] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 01ce1dae9db0472eb9a08ac196bde0cc in term 1.
I20260430 07:53:53.050874   666 leader_election.cc:304] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 01ce1dae9db0472eb9a08ac196bde0cc, 8d39b7f876294f979df191c0eb53602e; no voters: 
I20260430 07:53:53.051790   939 raft_consensus.cc:2804] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:53:53.053079   939 raft_consensus.cc:697] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [term 1 LEADER]: Becoming Leader. State: Replica: 01ce1dae9db0472eb9a08ac196bde0cc, State: Running, Role: LEADER
I20260430 07:53:53.053939   939 consensus_queue.cc:237] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
W20260430 07:53:53.062701   908 tablet.cc:2404] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:53:53.086068   459 catalog_manager.cc:5671] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc reported cstate change: term changed from 0 to 1, leader changed from <none> to 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2). New cstate: current_term: 1 leader_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } health_report { overall_health: UNKNOWN } } }
W20260430 07:53:53.138048   646 tablet.cc:2404] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:53:53.151402   600 raft_consensus.cc:1275] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 1 FOLLOWER]: Refusing update from remote peer 01ce1dae9db0472eb9a08ac196bde0cc: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:53:53.151402   862 raft_consensus.cc:1275] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 1 FOLLOWER]: Refusing update from remote peer 01ce1dae9db0472eb9a08ac196bde0cc: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:53:53.152582   939 consensus_queue.cc:1048] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:53:53.152994   947 consensus_queue.cc:1048] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:53:53.174717   950 mvcc.cc:204] Tried to move back new op lower bound from 7280785953373798400 to 7280785952997273600. Current Snapshot: MvccSnapshot[applied={T|T < 7280785953373798400}]
I20260430 07:53:53.177284   951 mvcc.cc:204] Tried to move back new op lower bound from 7280785953373798400 to 7280785952997273600. Current Snapshot: MvccSnapshot[applied={T|T < 7280785953373798400}]
W20260430 07:53:53.325295   966 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 0, which is lower than last-logged term 1 on local replica. Rejecting tablet copy request
I20260430 07:53:53.339860   862 tablet_service.cc:2044] Received Run Leader Election RPC: tablet_id: "5c76e8da6f20412c8d43c20ebd6bf579"
dest_uuid: "8d39b7f876294f979df191c0eb53602e"
 from {username='slave'} at 127.0.0.1:42452
I20260430 07:53:53.340195   862 raft_consensus.cc:493] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20260430 07:53:53.340338   862 raft_consensus.cc:3060] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:53:53.342871   862 raft_consensus.cc:515] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:53.344755   862 leader_election.cc:290] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [CANDIDATE]: Term 2 election: Requested vote from peers 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401), 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237)
I20260430 07:53:53.353747   600 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "5c76e8da6f20412c8d43c20ebd6bf579" candidate_uuid: "8d39b7f876294f979df191c0eb53602e" candidate_term: 2 candidate_status { last_received { term: 1 index: 17 } } ignore_live_leader: true dest_uuid: "6e78d688969848c88f7579ee4d45ac3b"
I20260430 07:53:53.353971   600 raft_consensus.cc:3060] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:53:53.356429   600 raft_consensus.cc:2468] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 8d39b7f876294f979df191c0eb53602e in term 2.
I20260430 07:53:53.357157   797 leader_election.cc:304] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6e78d688969848c88f7579ee4d45ac3b, 8d39b7f876294f979df191c0eb53602e; no voters: 
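The election summary above ("received 2 responses out of 3 voters: 2 yes votes") reflects the standard Raft majority rule: a candidate wins as soon as yes votes reach floor(n/2) + 1. A hedged sketch of that decision logic (names are illustrative, not Kudu's):

```python
# Illustrative majority-vote decision for a Raft election with n voters.
def election_decided(yes: int, no: int, num_voters: int):
    """Return "won"/"lost" once either side reaches a majority, else None."""
    majority = num_voters // 2 + 1
    if yes >= majority:
        return "won"
    if no >= majority:
        return "lost"
    return None  # still waiting on more responses

# The case logged above: 2 yes votes out of 3 voters decides the election
# without waiting for the third (paused) peer.
assert election_decided(yes=2, no=0, num_voters=3) == "won"
```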
I20260430 07:53:53.358420   938 raft_consensus.cc:2804] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:53:53.361799   938 raft_consensus.cc:697] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [term 2 LEADER]: Becoming Leader. State: Replica: 8d39b7f876294f979df191c0eb53602e, State: Running, Role: LEADER
I20260430 07:53:53.362486   938 consensus_queue.cc:237] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 17, Committed index: 17, Last appended: 1.17, Last appended by leader: 17, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } }
I20260430 07:53:53.366999   420 tablet_copy-itest.cc:232] successfully elected new leader
I20260430 07:53:53.368176   454 catalog_manager.cc:5671] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e reported cstate change: term changed from 1 to 2, leader changed from 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2) to 8d39b7f876294f979df191c0eb53602e (127.0.105.3). New cstate: current_term: 2 leader_uuid: "8d39b7f876294f979df191c0eb53602e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "01ce1dae9db0472eb9a08ac196bde0cc" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36401 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8d39b7f876294f979df191c0eb53602e" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 43025 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 } health_report { overall_health: UNKNOWN } } }
I20260430 07:53:53.370632   420 cluster_itest_util.cc:258] Not converged past 18 yet: 2.18 1.17
I20260430 07:53:53.474601   420 cluster_itest_util.cc:258] Not converged past 18 yet: 2.18 1.17
I20260430 07:53:53.678766   420 cluster_itest_util.cc:258] Not converged past 18 yet: 2.18 1.17
I20260430 07:53:53.963308   600 raft_consensus.cc:1275] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 2 FOLLOWER]: Refusing update from remote peer 8d39b7f876294f979df191c0eb53602e: Log matching property violated. Preceding OpId in replica: term: 1 index: 17. Preceding OpId from leader: term: 2 index: 18. (index mismatch)
I20260430 07:53:53.964217   938 consensus_queue.cc:1048] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [LEADER]: Connected to new peer: Peer: permanent_uuid: "6e78d688969848c88f7579ee4d45ac3b" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 41237 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 18, Last known committed idx: 17, Time since last communication: 0.000s
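The `LMP_MISMATCH` exchange above is Raft's log matching property in action: a follower accepts an append only if it holds the leader's stated preceding OpId (term, index); otherwise it refuses, and the leader walks `next_index` back and retries. A minimal sketch of the follower-side check, assuming a simple (term, index) tuple representation rather than Kudu's OpId protobuf:

```python
# Illustrative follower-side log-matching check (not Kudu's real code).
def check_log_matching(replica_preceding: tuple, leader_preceding: tuple) -> str:
    """Compare the (term, index) the replica holds at the append position
    against the (term, index) the leader claims precedes the new entries."""
    if replica_preceding == leader_preceding:
        return "OK"
    return "LMP_MISMATCH"

# The refusal logged above: replica has (term 1, index 17), while the new
# leader's first append claims preceding (term 2, index 18).
assert check_log_matching((1, 17), (2, 18)) == "LMP_MISMATCH"
# After the leader backs off to index 17, the logs line up.
assert check_log_matching((1, 17), (1, 17)) == "OK"
```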
I20260430 07:53:53.981962   420 tablet_copy-itest.cc:244] restarting workload...
W20260430 07:53:54.899318   797 proxy.cc:239] Call had error, refreshing address and retrying: Timed out: connection negotiation to 127.0.105.2:36401 for RPC RequestConsensusVote timed out after 1.553s (ON_OUTBOUND_QUEUE)
W20260430 07:53:56.455397   797 leader_election.cc:336] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401): Timed out: connection negotiation to 127.0.105.2:36401 for RPC RequestConsensusVote timed out after 1.553s (ON_OUTBOUND_QUEUE)
W20260430 07:53:56.550289   967 negotiation.cc:336] Failed RPC negotiation. Trace:
0430 07:53:53.344916 (+     0us) reactor.cc:678] Submitting negotiation task for client connection to 127.0.105.2:36401 (local address 127.0.105.3:33601)
0430 07:53:53.345852 (+   936us) negotiation.cc:107] Waiting for socket to connect
0430 07:53:53.345876 (+    24us) client_negotiation.cc:175] Beginning negotiation
0430 07:53:53.346045 (+   169us) client_negotiation.cc:253] Sending NEGOTIATE NegotiatePB request
0430 07:53:56.549528 (+3203483us) negotiation.cc:326] Negotiation complete: Timed out: Client connection negotiation failed: client connection to 127.0.105.2:36401: received 0 of 4 requested bytes
Metrics: {"client-negotiator.queue_time_us":758,"thread_start_us":493,"threads_started":1}
W20260430 07:53:58.992727   918 meta_cache.cc:302] tablet 5c76e8da6f20412c8d43c20ebd6bf579: replica 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401) has failed: Timed out: Write RPC to 127.0.105.2:36401 timed out after 5.000s (SENT)
W20260430 07:53:58.993000   918 batcher.cc:441] Timed out: Failed to write batch of 50 ops to tablet 5c76e8da6f20412c8d43c20ebd6bf579 after 1 attempt(s): Failed to write to server: 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401): Write RPC to 127.0.105.2:36401 timed out after 5.000s (SENT)
W20260430 07:53:59.621598   967 negotiation.cc:336] Failed RPC negotiation. Trace:
0430 07:53:56.551742 (+     0us) reactor.cc:678] Submitting negotiation task for client connection to 127.0.105.2:36401 (local address 127.0.105.3:45821)
0430 07:53:56.551895 (+   153us) negotiation.cc:107] Waiting for socket to connect
0430 07:53:56.551918 (+    23us) client_negotiation.cc:175] Beginning negotiation
0430 07:53:56.552092 (+   174us) client_negotiation.cc:253] Sending NEGOTIATE NegotiatePB request
0430 07:53:59.621394 (+3069302us) negotiation.cc:326] Negotiation complete: Timed out: Client connection negotiation failed: client connection to 127.0.105.2:36401: received 0 of 4 requested bytes
Metrics: {"client-negotiator.queue_time_us":37}
W20260430 07:53:59.622161   797 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 8d39b7f876294f979df191c0eb53602e -> Peer 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401): Couldn't send request to peer 01ce1dae9db0472eb9a08ac196bde0cc. Status: Timed out: Client connection negotiation failed: client connection to 127.0.105.2:36401: received 0 of 4 requested bytes. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:00.029939   420 tablet_copy-itest.cc:253] shutting down new leader 8d39b7f876294f979df191c0eb53602e...
I20260430 07:54:00.030270   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 780
I20260430 07:54:00.032218   902 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:00.153743   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 780
I20260430 07:54:00.177402   420 tablet_copy-itest.cc:256] tombstoning original follower...
I20260430 07:54:00.178640   580 tablet_service.cc:1558] Processing DeleteTablet for tablet 5c76e8da6f20412c8d43c20ebd6bf579 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:55580
I20260430 07:54:00.179813  1005 tablet_replica.cc:333] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: stopping tablet replica
I20260430 07:54:00.180326  1005 raft_consensus.cc:2243] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 2 FOLLOWER]: Raft consensus shutting down.
I20260430 07:54:00.182773  1005 raft_consensus.cc:2272] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b [term 2 FOLLOWER]: Raft consensus is shut down!
I20260430 07:54:00.184748  1005 ts_tablet_manager.cc:1916] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:00.190687  1005 ts_tablet_manager.cc:1929] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.170
I20260430 07:54:00.190866  1005 log.cc:1199] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestRejectRogueLeader.1777535629930685-420-0/minicluster-data/ts-0/wal/wals/5c76e8da6f20412c8d43c20ebd6bf579
I20260430 07:54:00.192368   420 tablet_copy-itest.cc:261] unpausing old (rogue) leader 01ce1dae9db0472eb9a08ac196bde0cc...
W20260430 07:54:00.216130  1008 negotiation.cc:336] Failed RPC negotiation. Trace:
0430 07:54:00.197901 (+     0us) reactor.cc:678] Submitting negotiation task for server connection from 127.0.105.3:45821 (local address 127.0.105.2:36401)
0430 07:54:00.214018 (+ 16117us) server_negotiation.cc:207] Beginning negotiation
0430 07:54:00.214026 (+     8us) server_negotiation.cc:400] Waiting for connection header
0430 07:54:00.214223 (+   197us) server_negotiation.cc:408] Connection header received
0430 07:54:00.214318 (+    95us) server_negotiation.cc:366] Received NEGOTIATE NegotiatePB request
0430 07:54:00.214324 (+     6us) server_negotiation.cc:462] Received NEGOTIATE request from client
0430 07:54:00.214410 (+    86us) server_negotiation.cc:378] Sending NEGOTIATE NegotiatePB response
0430 07:54:00.214817 (+   407us) negotiation.cc:326] Negotiation complete: Network error: Server connection negotiation failed: server connection from 127.0.105.3:45821: BlockingWrite error: write error: Broken pipe (error 32)
Metrics: {"server-negotiator.queue_time_us":15956,"threads_started":1}
W20260430 07:54:00.217109  1008 negotiation.cc:336] Failed RPC negotiation. Trace:
0430 07:54:00.199320 (+     0us) reactor.cc:678] Submitting negotiation task for server connection from 127.0.105.3:33601 (local address 127.0.105.2:36401)
0430 07:54:00.216427 (+ 17107us) server_negotiation.cc:207] Beginning negotiation
0430 07:54:00.216434 (+     7us) server_negotiation.cc:400] Waiting for connection header
0430 07:54:00.216458 (+    24us) server_negotiation.cc:408] Connection header received
0430 07:54:00.216534 (+    76us) server_negotiation.cc:366] Received NEGOTIATE NegotiatePB request
0430 07:54:00.216540 (+     6us) server_negotiation.cc:462] Received NEGOTIATE request from client
0430 07:54:00.216611 (+    71us) server_negotiation.cc:378] Sending NEGOTIATE NegotiatePB response
0430 07:54:00.216819 (+   208us) negotiation.cc:326] Negotiation complete: Network error: Server connection negotiation failed: server connection from 127.0.105.3:33601: BlockingWrite error: write error: Broken pipe (error 32)
Metrics: {"server-negotiator.queue_time_us":16976,"thread_start_us":14365,"threads_started":1}
W20260430 07:54:00.218029  1008 negotiation.cc:336] Failed RPC negotiation. Trace:
0430 07:54:00.208029 (+     0us) reactor.cc:678] Submitting negotiation task for server connection from 127.0.105.3:50543 (local address 127.0.105.2:36401)
0430 07:54:00.217399 (+  9370us) server_negotiation.cc:207] Beginning negotiation
0430 07:54:00.217404 (+     5us) server_negotiation.cc:400] Waiting for connection header
0430 07:54:00.217422 (+    18us) server_negotiation.cc:408] Connection header received
0430 07:54:00.217499 (+    77us) server_negotiation.cc:366] Received NEGOTIATE NegotiatePB request
0430 07:54:00.217509 (+    10us) server_negotiation.cc:462] Received NEGOTIATE request from client
0430 07:54:00.217593 (+    84us) server_negotiation.cc:378] Sending NEGOTIATE NegotiatePB response
0430 07:54:00.217786 (+   193us) negotiation.cc:326] Negotiation complete: Network error: Server connection negotiation failed: server connection from 127.0.105.3:50543: BlockingWrite error: write error: Broken pipe (error 32)
Metrics: {"server-negotiator.queue_time_us":9286,"threads_started":1}
W20260430 07:54:00.218336   666 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111)
W20260430 07:54:00.220311   666 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Couldn't send request to peer 6e78d688969848c88f7579ee4d45ac3b. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
W20260430 07:54:00.221877   666 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025): Couldn't send request to peer 8d39b7f876294f979df191c0eb53602e. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20260430 07:54:00.760037  1013 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:00.761507   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:01.185112  1013 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:01.186108   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:01.731154  1014 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:01.731976   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:02.249110  1015 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:02.250089   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:02.524084   666 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025): Couldn't send request to peer 8d39b7f876294f979df191c0eb53602e. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
W20260430 07:54:02.704167  1015 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:02.705142   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:03.172902  1015 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:03.173537   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:03.586402  1015 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:03.587244   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:04.180195  1020 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:04.180946   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:04.558530  1020 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:04.559094   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:04.960017  1020 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:04.960834   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:05.144526   666 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025): Couldn't send request to peer 8d39b7f876294f979df191c0eb53602e. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
I20260430 07:54:05.195605   420 tablet_copy-itest.cc:273] the rogue leader was not able to tablet copy the tombstoned follower
I20260430 07:54:05.197120  1020 ts_tablet_manager.cc:933] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Initiating tablet copy from peer 01ce1dae9db0472eb9a08ac196bde0cc (127.0.105.2:36401)
I20260430 07:54:05.198770  1020 tablet_copy_client.cc:323] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.2:36401
I20260430 07:54:05.206736   751 tablet_copy_service.cc:140] P 01ce1dae9db0472eb9a08ac196bde0cc: Received BeginTabletCopySession request for tablet 5c76e8da6f20412c8d43c20ebd6bf579 from peer 6e78d688969848c88f7579ee4d45ac3b ({username='slave'} at 127.0.105.1:50491)
I20260430 07:54:05.207000   751 tablet_copy_service.cc:161] P 01ce1dae9db0472eb9a08ac196bde0cc: Beginning new tablet copy session on tablet 5c76e8da6f20412c8d43c20ebd6bf579 from peer 6e78d688969848c88f7579ee4d45ac3b at {username='slave'} at 127.0.105.1:50491: session id = 6e78d688969848c88f7579ee4d45ac3b-5c76e8da6f20412c8d43c20ebd6bf579
I20260430 07:54:05.209766   751 tablet_copy_source_session.cc:215] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc: Tablet Copy: opened 0 blocks and 1 log segments
W20260430 07:54:05.493770  1020 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:05.494964   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:05.694483   666 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111) [suppressed 10 similar messages]
W20260430 07:54:06.105999  1028 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:06.106789   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:06.490321  1028 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:06.491612   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:07.116314  1031 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:07.117270   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:07.608132  1031 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:07.608984   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:08.045012  1031 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:08.045855   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:08.481462  1031 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:08.482494   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:08.781476   666 consensus_peers.cc:597] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 8d39b7f876294f979df191c0eb53602e (127.0.105.3:43025): Couldn't send request to peer 8d39b7f876294f979df191c0eb53602e. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.3:43025: connect: Connection refused (error 111). This is attempt 16: this message will repeat every 5th retry.
W20260430 07:54:09.099545  1034 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:09.100374   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:09.579403  1034 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:09.580320   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
W20260430 07:54:09.965361  1034 ts_tablet_manager.cc:732] T 5c76e8da6f20412c8d43c20ebd6bf579 P 6e78d688969848c88f7579ee4d45ac3b: Tablet Copy: Invalid argument: Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request
W20260430 07:54:09.966176   666 consensus_peers.cc:576] T 5c76e8da6f20412c8d43c20ebd6bf579 P 01ce1dae9db0472eb9a08ac196bde0cc -> Peer 6e78d688969848c88f7579ee4d45ac3b (127.0.105.1:41237): Unable to start Tablet Copy on peer: error { code: INVALID_CONFIG status { code: INVALID_ARGUMENT message: "Leader has replica of tablet 5c76e8da6f20412c8d43c20ebd6bf579 with term 1, which is lower than last-logged term 2 on local replica. Rejecting tablet copy request" } }
I20260430 07:54:10.226744   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 518
I20260430 07:54:10.229568   640 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:10.356349   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 518
I20260430 07:54:10.401263   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 649
I20260430 07:54:10.402563   771 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:10.506393   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 649
I20260430 07:54:10.527150   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 428
I20260430 07:54:10.528378   489 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:10.648547   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 428
2026-04-30T07:54:10Z chronyd exiting
[       OK ] TabletCopyITest.TestRejectRogueLeader (20675 ms)
[ RUN      ] TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest
2026-04-30T07:54:10Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2026-04-30T07:54:10Z Disabled control of system clock
I20260430 07:54:10.729583   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:38847
--webserver_interface=127.0.105.62
--webserver_port=0
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:38847 with env {}
W20260430 07:54:11.126683  1043 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:11.127068  1043 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:11.127140  1043 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:11.136219  1043 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:54:11.136353  1043 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:11.136408  1043 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:54:11.136449  1043 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:54:11.147934  1043 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:38847
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:38847
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:11.150276  1043 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:11.153029  1043 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:11.166074  1049 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:11.166152  1048 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:11.167101  1051 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:11.167690  1043 server_base.cc:1061] running on GCE node
I20260430 07:54:11.168753  1043 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:11.170571  1043 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:11.171880  1043 hybrid_clock.cc:648] HybridClock initialized: now 1777535651171813 us; error 65 us; skew 500 ppm
I20260430 07:54:11.172415  1043 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:11.176798  1043 webserver.cc:492] Webserver started at http://127.0.105.62:35741/ using document root <none> and password file <none>
I20260430 07:54:11.177876  1043 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:11.178210  1043 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:11.178701  1043 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:11.181551  1043 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/instance:
uuid: "9416b36e8a7e46edaa84c3da39eb9c2e"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.182512  1043 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal/instance:
uuid: "9416b36e8a7e46edaa84c3da39eb9c2e"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.189205  1043 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.000s
I20260430 07:54:11.193485  1057 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:11.195433  1043 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20260430 07:54:11.195684  1043 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "9416b36e8a7e46edaa84c3da39eb9c2e"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.196067  1043 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:11.217163  1043 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:11.218273  1043 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:11.218636  1043 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:11.244089  1043 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:38847
I20260430 07:54:11.244055  1108 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:38847 every 8 connection(s)
I20260430 07:54:11.245766  1043 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:54:11.250854  1109 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:11.254902   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1043
I20260430 07:54:11.255173   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/wal/instance
I20260430 07:54:11.265535  1109 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e: Bootstrap starting.
I20260430 07:54:11.271741  1109 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:11.273310  1109 log.cc:826] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:11.276602  1109 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e: No bootstrap required, opened a new log
I20260430 07:54:11.283450  1109 raft_consensus.cc:359] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } }
I20260430 07:54:11.283908  1109 raft_consensus.cc:385] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:11.284052  1109 raft_consensus.cc:740] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9416b36e8a7e46edaa84c3da39eb9c2e, State: Initialized, Role: FOLLOWER
I20260430 07:54:11.284744  1109 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } }
I20260430 07:54:11.284977  1109 raft_consensus.cc:399] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:11.285125  1109 raft_consensus.cc:493] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:11.285353  1109 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:11.287848  1109 raft_consensus.cc:515] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } }
I20260430 07:54:11.288416  1109 leader_election.cc:304] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9416b36e8a7e46edaa84c3da39eb9c2e; no voters: 
I20260430 07:54:11.289135  1109 leader_election.cc:290] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:11.289397  1114 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:11.290272  1114 raft_consensus.cc:697] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [term 1 LEADER]: Becoming Leader. State: Replica: 9416b36e8a7e46edaa84c3da39eb9c2e, State: Running, Role: LEADER
I20260430 07:54:11.290817  1114 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } }
I20260430 07:54:11.291702  1109 sys_catalog.cc:565] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:54:11.294432  1115 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } } }
I20260430 07:54:11.294811  1115 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:11.295148  1116 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9416b36e8a7e46edaa84c3da39eb9c2e. Latest consensus state: current_term: 1 leader_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9416b36e8a7e46edaa84c3da39eb9c2e" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 38847 } } }
I20260430 07:54:11.295818  1116 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:11.297215  1120 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:54:11.304286  1120 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:54:11.315708  1120 catalog_manager.cc:1357] Generated new cluster ID: 3ab37af0fd764fc292311ce5a453fcb3
I20260430 07:54:11.315874  1120 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:54:11.339182  1120 catalog_manager.cc:1380] Generated new certificate authority record
I20260430 07:54:11.341130  1120 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:54:11.350312  1120 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 9416b36e8a7e46edaa84c3da39eb9c2e: Generated new TSK 0
I20260430 07:54:11.351775  1120 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:54:11.370532   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:0
--local_ip_for_outbound_sockets=127.0.105.1
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20260430 07:54:11.797358  1133 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:11.797921  1133 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:11.798090  1133 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:11.808835  1133 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:11.809051  1133 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:54:11.821568  1133 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:11.823561  1133 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:11.825855  1133 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:11.839148  1138 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:11.839223  1139 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:11.842347  1141 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:11.842763  1133 server_base.cc:1061] running on GCE node
I20260430 07:54:11.843573  1133 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:11.844767  1133 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:11.846043  1133 hybrid_clock.cc:648] HybridClock initialized: now 1777535651845938 us; error 96 us; skew 500 ppm
I20260430 07:54:11.846490  1133 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:11.849556  1133 webserver.cc:492] Webserver started at http://127.0.105.1:40559/ using document root <none> and password file <none>
I20260430 07:54:11.850561  1133 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:11.850725  1133 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:11.851099  1133 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:11.853830  1133 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/instance:
uuid: "530ef53bc12a4d93b2eca797cd0b7281"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.854710  1133 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal/instance:
uuid: "530ef53bc12a4d93b2eca797cd0b7281"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.860955  1133 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.001s
I20260430 07:54:11.865497  1147 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:11.867297  1133 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20260430 07:54:11.867540  1133 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "530ef53bc12a4d93b2eca797cd0b7281"
format_stamp: "Formatted at 2026-04-30 07:54:11 on dist-test-slave-1g5s"
I20260430 07:54:11.867825  1133 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:11.884653  1133 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:11.886128  1133 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:11.887560  1133 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:11.889405  1133 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:11.891701  1133 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:11.891973  1133 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:11.892634  1133 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:11.892802  1133 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s	user 0.000s	sys 0.000s
I20260430 07:54:11.947075  1133 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:42567
I20260430 07:54:11.947134  1259 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:42567 every 8 connection(s)
I20260430 07:54:11.948859  1133 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:54:11.957966   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1133
I20260430 07:54:11.958242   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/wal/instance
I20260430 07:54:11.963033   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:0
--local_ip_for_outbound_sockets=127.0.105.2
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20260430 07:54:11.977501  1260 heartbeater.cc:344] Connected to a master server at 127.0.105.62:38847
I20260430 07:54:11.978096  1260 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:11.979665  1260 heartbeater.cc:507] Master 127.0.105.62:38847 requested a full tablet report, sending...
I20260430 07:54:11.985478  1074 ts_manager.cc:194] Registered new tserver with Master: 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567)
I20260430 07:54:11.987768  1074 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:49507
W20260430 07:54:12.549885  1264 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:12.551028  1264 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:12.551784  1264 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:12.579972  1264 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:12.580446  1264 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:54:12.598199  1264 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:12.600927  1264 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:12.603332  1264 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:12.621264  1269 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:12.621160  1270 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:12.621160  1272 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:12.622382  1264 server_base.cc:1061] running on GCE node
I20260430 07:54:12.623587  1264 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:12.625371  1264 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:12.628942  1264 hybrid_clock.cc:648] HybridClock initialized: now 1777535652628760 us; error 91 us; skew 500 ppm
I20260430 07:54:12.629508  1264 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:12.633462  1264 webserver.cc:492] Webserver started at http://127.0.105.2:34183/ using document root <none> and password file <none>
I20260430 07:54:12.634855  1264 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:12.635012  1264 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:12.635476  1264 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:12.640486  1264 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/instance:
uuid: "155ec5af1cab42a6aed09cdaab8a150f"
format_stamp: "Formatted at 2026-04-30 07:54:12 on dist-test-slave-1g5s"
I20260430 07:54:12.642036  1264 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal/instance:
uuid: "155ec5af1cab42a6aed09cdaab8a150f"
format_stamp: "Formatted at 2026-04-30 07:54:12 on dist-test-slave-1g5s"
I20260430 07:54:12.652715  1264 fs_manager.cc:696] Time spent creating directory manager: real 0.010s	user 0.009s	sys 0.001s
I20260430 07:54:12.658537  1278 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:12.662401  1264 fs_manager.cc:730] Time spent opening block manager: real 0.006s	user 0.003s	sys 0.003s
I20260430 07:54:12.663129  1264 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "155ec5af1cab42a6aed09cdaab8a150f"
format_stamp: "Formatted at 2026-04-30 07:54:12 on dist-test-slave-1g5s"
I20260430 07:54:12.663396  1264 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:12.687825  1264 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:12.690055  1264 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:12.691046  1264 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:12.693960  1264 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:12.696384  1264 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:12.696545  1264 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:12.697655  1264 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:12.700362  1264 ts_tablet_manager.cc:595] Time spent register tablets: real 0.001s	user 0.000s	sys 0.000s
I20260430 07:54:12.816560  1264 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:39647
I20260430 07:54:12.816646  1390 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:39647 every 8 connection(s)
I20260430 07:54:12.824324  1264 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:54:12.829854   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1264
I20260430 07:54:12.830068   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal/instance
I20260430 07:54:12.835371   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.3:0
--local_ip_for_outbound_sockets=127.0.105.3
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20260430 07:54:12.870718  1391 heartbeater.cc:344] Connected to a master server at 127.0.105.62:38847
I20260430 07:54:12.871578  1391 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:12.874332  1391 heartbeater.cc:507] Master 127.0.105.62:38847 requested a full tablet report, sending...
I20260430 07:54:12.879284  1074 ts_manager.cc:194] Registered new tserver with Master: 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647)
I20260430 07:54:12.880151  1074 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:46267
I20260430 07:54:12.993462  1260 heartbeater.cc:499] Master 127.0.105.62:38847 was elected leader, sending a full tablet report...
W20260430 07:54:13.418949  1395 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:13.419333  1395 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:13.419576  1395 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:13.430831  1395 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:13.431224  1395 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.3
I20260430 07:54:13.445214  1395 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.3
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:13.447125  1395 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:13.449765  1395 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:13.461688  1401 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:13.462388  1403 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:13.462524  1400 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:13.463119  1395 server_base.cc:1061] running on GCE node
I20260430 07:54:13.463810  1395 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:13.465010  1395 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:13.466441  1395 hybrid_clock.cc:648] HybridClock initialized: now 1777535653466368 us; error 85 us; skew 500 ppm
I20260430 07:54:13.466971  1395 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:13.470530  1395 webserver.cc:492] Webserver started at http://127.0.105.3:45833/ using document root <none> and password file <none>
I20260430 07:54:13.471597  1395 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:13.471725  1395 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:13.472528  1395 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:13.475816  1395 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/instance:
uuid: "89bc1171ed404f908d3b0dae9c36c49b"
format_stamp: "Formatted at 2026-04-30 07:54:13 on dist-test-slave-1g5s"
I20260430 07:54:13.476868  1395 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal/instance:
uuid: "89bc1171ed404f908d3b0dae9c36c49b"
format_stamp: "Formatted at 2026-04-30 07:54:13 on dist-test-slave-1g5s"
I20260430 07:54:13.484301  1395 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.000s
I20260430 07:54:13.489281  1409 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:13.491230  1395 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20260430 07:54:13.491412  1395 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal
uuid: "89bc1171ed404f908d3b0dae9c36c49b"
format_stamp: "Formatted at 2026-04-30 07:54:13 on dist-test-slave-1g5s"
I20260430 07:54:13.491704  1395 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:13.526317  1395 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:13.527475  1395 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:13.527815  1395 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:13.529094  1395 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:13.531225  1395 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:13.531335  1395 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:13.531530  1395 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:13.531597  1395 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:13.588697  1395 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.3:39471
I20260430 07:54:13.588815  1521 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.3:39471 every 8 connection(s)
I20260430 07:54:13.590405  1395 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
I20260430 07:54:13.600044   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1395
I20260430 07:54:13.600322   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal/instance
I20260430 07:54:13.605446   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.4:0
--local_ip_for_outbound_sockets=127.0.105.4
--webserver_interface=127.0.105.4
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20260430 07:54:13.610460  1522 heartbeater.cc:344] Connected to a master server at 127.0.105.62:38847
I20260430 07:54:13.611195  1522 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:13.613131  1522 heartbeater.cc:507] Master 127.0.105.62:38847 requested a full tablet report, sending...
I20260430 07:54:13.616259  1074 ts_manager.cc:194] Registered new tserver with Master: 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471)
I20260430 07:54:13.617727  1074 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.3:47301
I20260430 07:54:13.887197  1391 heartbeater.cc:499] Master 127.0.105.62:38847 was elected leader, sending a full tablet report...
W20260430 07:54:14.040917  1526 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:14.041440  1526 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:14.041610  1526 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:14.051777  1526 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:14.051992  1526 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.4
I20260430 07:54:14.063505  1526 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--webserver_interface=127.0.105.4
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.4
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:14.065164  1526 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:14.067169  1526 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:14.079581  1532 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:14.079830  1534 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:14.079922  1531 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:14.080926  1526 server_base.cc:1061] running on GCE node
I20260430 07:54:14.082258  1526 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:14.083917  1526 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:14.085809  1526 hybrid_clock.cc:648] HybridClock initialized: now 1777535654085676 us; error 122 us; skew 500 ppm
I20260430 07:54:14.086259  1526 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:14.091809  1526 webserver.cc:492] Webserver started at http://127.0.105.4:45675/ using document root <none> and password file <none>
I20260430 07:54:14.093788  1526 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:14.094000  1526 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:14.094676  1526 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:14.098723  1526 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/instance:
uuid: "05ac35aa75a24aa7affb7d3e3645707f"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.099776  1526 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal/instance:
uuid: "05ac35aa75a24aa7affb7d3e3645707f"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.109118  1526 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.009s	sys 0.000s
I20260430 07:54:14.113745  1540 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:14.115774  1526 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20260430 07:54:14.116014  1526 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal
uuid: "05ac35aa75a24aa7affb7d3e3645707f"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.116273  1526 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:14.141717  1526 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:14.142931  1526 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:14.144030  1526 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:14.146047  1526 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:14.148612  1526 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:14.148907  1526 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:14.149050  1526 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:14.149204  1526 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.001s	sys 0.000s
I20260430 07:54:14.212613  1526 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.4:45799
I20260430 07:54:14.212709  1652 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.4:45799 every 8 connection(s)
I20260430 07:54:14.215742  1526 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
I20260430 07:54:14.221814   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1526
I20260430 07:54:14.222036   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal/instance
I20260430 07:54:14.226577   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.5:0
--local_ip_for_outbound_sockets=127.0.105.5
--webserver_interface=127.0.105.5
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--builtin_ntp_servers=127.0.105.20:44475
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20260430 07:54:14.235870  1653 heartbeater.cc:344] Connected to a master server at 127.0.105.62:38847
I20260430 07:54:14.236357  1653 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:14.237610  1653 heartbeater.cc:507] Master 127.0.105.62:38847 requested a full tablet report, sending...
I20260430 07:54:14.239547  1074 ts_manager.cc:194] Registered new tserver with Master: 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799)
I20260430 07:54:14.240631  1074 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.4:39415
I20260430 07:54:14.624044  1522 heartbeater.cc:499] Master 127.0.105.62:38847 was elected leader, sending a full tablet report...
W20260430 07:54:14.702571  1657 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:14.703125  1657 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:14.703271  1657 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:14.714381  1657 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:14.714588  1657 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.5
I20260430 07:54:14.726766  1657 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:44475
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.5:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/info.pb
--webserver_interface=127.0.105.5
--webserver_port=0
--tserver_master_addrs=127.0.105.62:38847
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.5
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:14.728479  1657 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:14.730525  1657 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:14.743672  1663 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:14.743700  1662 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:14.744602  1665 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:14.746497  1657 server_base.cc:1061] running on GCE node
I20260430 07:54:14.747645  1657 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:14.749068  1657 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:14.750464  1657 hybrid_clock.cc:648] HybridClock initialized: now 1777535654750406 us; error 54 us; skew 500 ppm
I20260430 07:54:14.750882  1657 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:14.754245  1657 webserver.cc:492] Webserver started at http://127.0.105.5:36687/ using document root <none> and password file <none>
I20260430 07:54:14.755215  1657 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:14.755918  1657 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:14.756330  1657 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:14.759153  1657 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/instance:
uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.760105  1657 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal/instance:
uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.767199  1657 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.007s	sys 0.001s
I20260430 07:54:14.772533  1671 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:14.774816  1657 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.001s	sys 0.001s
I20260430 07:54:14.775005  1657 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal
uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
format_stamp: "Formatted at 2026-04-30 07:54:14 on dist-test-slave-1g5s"
I20260430 07:54:14.775246  1657 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:14.802567  1657 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:14.803575  1657 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:14.803877  1657 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:14.805099  1657 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:14.807067  1657 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:14.807174  1657 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:14.807257  1657 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:14.807327  1657 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:14.876674  1657 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.5:34847
I20260430 07:54:14.876744  1783 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.5:34847 every 8 connection(s)
I20260430 07:54:14.878515  1657 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/info.pb
I20260430 07:54:14.880533   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 1657
I20260430 07:54:14.880774   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal/instance
I20260430 07:54:14.898773  1784 heartbeater.cc:344] Connected to a master server at 127.0.105.62:38847
I20260430 07:54:14.899221  1784 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:14.900298  1784 heartbeater.cc:507] Master 127.0.105.62:38847 requested a full tablet report, sending...
I20260430 07:54:14.902586  1073 ts_manager.cc:194] Registered new tserver with Master: 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:14.903584  1073 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.5:39041
I20260430 07:54:14.914333   420 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20260430 07:54:14.959847  1073 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:49060:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 5
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20260430 07:54:14.961519  1073 catalog_manager.cc:7033] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 6 tablet servers would be needed, 5 are available. Consider bringing up more tablet servers.
I20260430 07:54:15.029069  1326 tablet_service.cc:1511] Processing CreateTablet for tablet 40d7739e889e4609ac09c7384fa25c88 (DEFAULT_TABLE table=test-workload [id=98919faf7f4347f1b945d0c425317cc9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:15.032325  1326 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:15.038208  1457 tablet_service.cc:1511] Processing CreateTablet for tablet 40d7739e889e4609ac09c7384fa25c88 (DEFAULT_TABLE table=test-workload [id=98919faf7f4347f1b945d0c425317cc9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:15.040531  1457 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:15.045506  1195 tablet_service.cc:1511] Processing CreateTablet for tablet 40d7739e889e4609ac09c7384fa25c88 (DEFAULT_TABLE table=test-workload [id=98919faf7f4347f1b945d0c425317cc9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:15.047387  1195 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:15.047142  1719 tablet_service.cc:1511] Processing CreateTablet for tablet 40d7739e889e4609ac09c7384fa25c88 (DEFAULT_TABLE table=test-workload [id=98919faf7f4347f1b945d0c425317cc9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:15.049209  1719 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:15.049984  1588 tablet_service.cc:1511] Processing CreateTablet for tablet 40d7739e889e4609ac09c7384fa25c88 (DEFAULT_TABLE table=test-workload [id=98919faf7f4347f1b945d0c425317cc9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:15.052078  1588 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:15.078377  1812 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Bootstrap starting.
I20260430 07:54:15.089227  1815 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Bootstrap starting.
I20260430 07:54:15.089540  1812 tablet_bootstrap.cc:654] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:15.091766  1812 log.cc:826] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:15.094588  1814 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap starting.
I20260430 07:54:15.095896  1813 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Bootstrap starting.
I20260430 07:54:15.099109  1814 tablet_bootstrap.cc:654] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:15.100132  1813 tablet_bootstrap.cc:654] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:15.100971  1814 log.cc:826] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:15.102947  1813 log.cc:826] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:15.103899  1816 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap starting.
I20260430 07:54:15.104530  1815 tablet_bootstrap.cc:654] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:15.106293  1815 log.cc:826] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:15.108162  1812 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: No bootstrap required, opened a new log
I20260430 07:54:15.108721  1812 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Time spent bootstrapping tablet: real 0.031s	user 0.007s	sys 0.007s
I20260430 07:54:15.111665  1816 tablet_bootstrap.cc:654] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:15.111522  1813 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: No bootstrap required, opened a new log
I20260430 07:54:15.112447  1813 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Time spent bootstrapping tablet: real 0.019s	user 0.009s	sys 0.003s
I20260430 07:54:15.114910  1816 log.cc:826] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:15.118860  1812 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.119982  1812 raft_consensus.cc:385] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:15.120182  1812 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89bc1171ed404f908d3b0dae9c36c49b, State: Initialized, Role: FOLLOWER
I20260430 07:54:15.120200  1814 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: No bootstrap required, opened a new log
I20260430 07:54:15.120571  1814 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent bootstrapping tablet: real 0.026s	user 0.013s	sys 0.009s
I20260430 07:54:15.121126  1812 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.121444  1815 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: No bootstrap required, opened a new log
I20260430 07:54:15.121785  1815 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Time spent bootstrapping tablet: real 0.033s	user 0.012s	sys 0.002s
I20260430 07:54:15.122798  1813 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.123886  1813 raft_consensus.cc:385] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:15.124042  1813 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 530ef53bc12a4d93b2eca797cd0b7281, State: Initialized, Role: FOLLOWER
I20260430 07:54:15.125005  1813 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.129904  1814 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.130635  1814 raft_consensus.cc:385] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:15.131173  1814 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 05ac35aa75a24aa7affb7d3e3645707f, State: Initialized, Role: FOLLOWER
I20260430 07:54:15.132268  1814 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.133829  1816 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: No bootstrap required, opened a new log
I20260430 07:54:15.134402  1816 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent bootstrapping tablet: real 0.031s	user 0.008s	sys 0.005s
I20260430 07:54:15.134902  1813 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Time spent starting tablet: real 0.022s	user 0.013s	sys 0.004s
I20260430 07:54:15.136442  1653 heartbeater.cc:499] Master 127.0.105.62:38847 was elected leader, sending a full tablet report...
I20260430 07:54:15.140364  1814 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent starting tablet: real 0.019s	user 0.003s	sys 0.011s
I20260430 07:54:15.143210  1815 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.144028  1815 raft_consensus.cc:385] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:15.144196  1815 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 155ec5af1cab42a6aed09cdaab8a150f, State: Initialized, Role: FOLLOWER
I20260430 07:54:15.145402  1815 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.151470  1812 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Time spent starting tablet: real 0.042s	user 0.021s	sys 0.018s
I20260430 07:54:15.151705  1816 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.152208  1816 raft_consensus.cc:385] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:15.152364  1816 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9fcc3716c75543efaca7f7f4f0aec6fc, State: Initialized, Role: FOLLOWER
I20260430 07:54:15.153216  1816 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.154179  1815 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Time spent starting tablet: real 0.032s	user 0.022s	sys 0.001s
I20260430 07:54:15.157486  1784 heartbeater.cc:499] Master 127.0.105.62:38847 was elected leader, sending a full tablet report...
I20260430 07:54:15.158793  1816 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent starting tablet: real 0.024s	user 0.006s	sys 0.009s
I20260430 07:54:15.182461  1824 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:54:15.182765  1824 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.185037  1824 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:15.198882  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:15.199513  1477 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 0.
W20260430 07:54:15.207121  1261 tablet.cc:2404] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:15.208405  1215 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:15.208406  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:15.208981  1215 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 0.
I20260430 07:54:15.208981  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 0.
I20260430 07:54:15.209084  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:15.209671  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 0.
I20260430 07:54:15.209692  1542 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 89bc1171ed404f908d3b0dae9c36c49b, 9fcc3716c75543efaca7f7f4f0aec6fc; no voters: 
I20260430 07:54:15.210541  1824 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Leader pre-election won for term 1
I20260430 07:54:15.210754  1824 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:54:15.210884  1824 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:15.214883  1824 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.216221  1824 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 1 election: Requested vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:15.216527  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b"
I20260430 07:54:15.216548  1215 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281"
I20260430 07:54:15.216792  1477 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:15.216817  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:15.217006  1346 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:15.217423  1215 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:15.217940  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:15.218174  1739 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 0 FOLLOWER]: Advancing to term 1
W20260430 07:54:15.220180  1654 tablet.cc:2404] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:15.220824  1477 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 1.
I20260430 07:54:15.221103  1215 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 1.
I20260430 07:54:15.221896  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 1.
I20260430 07:54:15.221900  1542 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 530ef53bc12a4d93b2eca797cd0b7281, 89bc1171ed404f908d3b0dae9c36c49b; no voters: 
I20260430 07:54:15.222124  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 1.
I20260430 07:54:15.222546  1824 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:15.223106  1824 raft_consensus.cc:697] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 LEADER]: Becoming Leader. State: Replica: 05ac35aa75a24aa7affb7d3e3645707f, State: Running, Role: LEADER
I20260430 07:54:15.223729  1824 consensus_queue.cc:237] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:15.232249  1072 catalog_manager.cc:5671] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f reported cstate change: term changed from 0 to 1, leader changed from <none> to 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4). New cstate: current_term: 1 leader_uuid: "05ac35aa75a24aa7affb7d3e3645707f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } health_report { overall_health: UNKNOWN } } }
I20260430 07:54:15.313591   420 tablet_copy-itest.cc:761] Iteration 1
W20260430 07:54:15.336690  1392 tablet.cc:2404] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20260430 07:54:15.344484  1523 tablet.cc:2404] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:15.354187  1739 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:15.354192  1477 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:15.354187  1346 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:15.354187  1215 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:15.355912  1838 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20260430 07:54:15.357008  1824 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:54:15.357525  1831 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:54:15.357997  1837 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20260430 07:54:15.381456  1785 tablet.cc:2404] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:15.396633  1836 mvcc.cc:204] Tried to move back new op lower bound from 7280786044312784896 to 7280786043805126656. Current Snapshot: MvccSnapshot[applied={T|T < 7280786044312784896}]
I20260430 07:54:15.409066  1841 mvcc.cc:204] Tried to move back new op lower bound from 7280786044312784896 to 7280786043805126656. Current Snapshot: MvccSnapshot[applied={T|T < 7280786044312784896}]
I20260430 07:54:15.419559  1839 mvcc.cc:204] Tried to move back new op lower bound from 7280786044312784896 to 7280786043805126656. Current Snapshot: MvccSnapshot[applied={T|T < 7280786044312784896}]
I20260430 07:54:15.643167   420 tablet_copy-itest.cc:780] Tombstoning follower tablet 40d7739e889e4609ac09c7384fa25c88 on TS 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:15.644697  1719 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:42396
I20260430 07:54:15.646602  1854 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: stopping tablet replica
I20260430 07:54:15.648394  1854 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Raft consensus shutting down.
I20260430 07:54:15.649539  1854 pending_rounds.cc:70] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Trying to abort 1 pending ops.
I20260430 07:54:15.649720  1854 pending_rounds.cc:77] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Aborting op as it isn't in flight: id { term: 1 index: 22 } timestamp: 7280786045513240576 op_type: WRITE_OP write_request { tablet_id: "40d7739e889e4609ac09c7384fa25c88" schema { columns { name: "key" type: INT32 is_key: true is_nullable: false immutable: false } columns { name: "int_val" type: INT32 is_key: false is_nullable: false immutable: false } columns { name: "string_val" type: STRING is_key: false is_nullable: true immutable: false } } row_operations { rows: "<redacted>""\001\007\0007\352\302.u\224\257n\000\000\000\000\000\000\000\000\000@\000\000\000\000\000\000" indirect_data: "<redacted>""000..." } external_consistency_mode: CLIENT_PROPAGATED propagated_timestamp: 7280786045468712960 authz_token { token_data: "\010\323\227\314\317\006\"3\n\005slave\022*\n 98919faf7f4347f1b945d0c425317cc9\020\001\030\001 \001(\001" signature: "<redacted>""\033\025!\377)\347\240\306\264:3\301\370\0254\320\370}\246\341!>\010\211$\314\352\342w\261\212\036\242\243#\000v\222\343\240\177\357\267b\204\221\\\003\311\350p<\201\325~\210iR\nUM\366\006\242" signing_key_seq_num: 0 } } request_id { client_id: "5a255d4eee2b4744aec1f9b3e10df961" seq_no: 20 first_incomplete_seq_no: 20 attempt_no: 0 }
I20260430 07:54:15.650724  1854 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Raft consensus is shut down!
W20260430 07:54:15.652690  1542 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f -> Peer 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847): Couldn't send request to peer 9fcc3716c75543efaca7f7f4f0aec6fc. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: STOPPING. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:15.653503  1854 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:15.660171  1854 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.22
I20260430 07:54:15.660418  1854 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal/wals/40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:16.693949  1856 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Initiating tablet copy from peer 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799)
I20260430 07:54:16.695505  1856 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.4:45799
I20260430 07:54:16.729981  1628 tablet_copy_service.cc:140] P 05ac35aa75a24aa7affb7d3e3645707f: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 9fcc3716c75543efaca7f7f4f0aec6fc ({username='slave'} at 127.0.105.5:40793)
I20260430 07:54:16.730444  1628 tablet_copy_service.cc:161] P 05ac35aa75a24aa7affb7d3e3645707f: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 9fcc3716c75543efaca7f7f4f0aec6fc at {username='slave'} at 127.0.105.5:40793: session id = 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:16.739398  1628 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:16.743399  1856 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:16.752909  1856 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.22
I20260430 07:54:16.755865   420 tablet_copy-itest.cc:790] Tombstoning leader tablet 40d7739e889e4609ac09c7384fa25c88 on TS 05ac35aa75a24aa7affb7d3e3645707f
I20260430 07:54:16.757321  1856 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:16.757377  1588 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:60160
I20260430 07:54:16.758819  1860 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: stopping tablet replica
I20260430 07:54:16.762691  1856 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:16.763062  1856 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:16.764957  1860 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 LEADER]: Raft consensus shutting down.
I20260430 07:54:16.768965  1860 pending_rounds.cc:70] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Trying to abort 1 pending ops.
I20260430 07:54:16.769325  1856 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:16.769274  1860 pending_rounds.cc:77] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Aborting op as it isn't in flight: id { term: 1 index: 107 } timestamp: 7280786050080329728 op_type: WRITE_OP write_request { tablet_id: "40d7739e889e4609ac09c7384fa25c88" schema { columns { name: "key" type: INT32 is_key: true is_nullable: false immutable: false } columns { name: "int_val" type: INT32 is_key: false is_nullable: false immutable: false } columns { name: "string_val" type: STRING is_key: false is_nullable: true immutable: false } } row_operations { rows: "<redacted>""\001\007\000@\325\360#\031\203C\256\000\000\000\000\000\000\000\000\000@\000\000\000\000\000\000" indirect_data: "<redacted>""<... run of '0' padding characters truncated ...>" } external_consistency_mode: CLIENT_PROPAGATED propagated_timestamp: 7280786050030727168 authz_token { token_data: "\010\323\227\314\317\006\"3\n\005slave\022*\n 98919faf7f4347f1b945d0c425317cc9\020\001\030\001 \001(\001" signature: "<redacted>""\033\025!\377)\347\240\306\264:3\301\370\0254\320\370}\246\341!>\010\211$\314\352\342w\261\212\036\242\243#\000v\222\343\240\177\357\267b\204\221\\\003\311\350p<\201\325~\210iR\nUM\366\006\242" signing_key_seq_num: 0 } } request_id { client_id: "5a255d4eee2b4744aec1f9b3e10df961" seq_no: 105 first_incomplete_seq_no: 105 attempt_no: 0 }
W20260430 07:54:16.770027  1860 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:60162: Aborted: Op aborted
I20260430 07:54:16.771322  1860 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Raft consensus is shut down!
I20260430 07:54:16.773499  1856 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap starting.
I20260430 07:54:16.775004  1860 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:16.789069  1860 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.107
I20260430 07:54:16.789691  1860 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal/wals/40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:16.932842  1856 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap replayed 1/1 log segments. Stats: ops{read=105 overwritten=0 applied=104 ignored=0} inserts{seen=103 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20260430 07:54:16.933558  1856 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap complete.
I20260430 07:54:16.933890  1856 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent bootstrapping tablet: real 0.161s	user 0.130s	sys 0.024s
I20260430 07:54:16.934887  1856 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:16.936470  1856 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9fcc3716c75543efaca7f7f4f0aec6fc, State: Initialized, Role: FOLLOWER
I20260430 07:54:16.937147  1856 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 104, Last appended: 1.105, Last appended by leader: 105, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:16.938843  1856 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent starting tablet: real 0.005s	user 0.003s	sys 0.003s
I20260430 07:54:16.940264  1628 tablet_copy_service.cc:342] P 05ac35aa75a24aa7affb7d3e3645707f: Request end of tablet copy session 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.5:40793
I20260430 07:54:16.940560  1628 tablet_copy_service.cc:434] P 05ac35aa75a24aa7affb7d3e3645707f: ending tablet copy session 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:18.262117  1867 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:18.262471  1867 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.264086  1868 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:18.264299  1868 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.265429  1867 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:18.285995  1868 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:18.296927  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:18.297278  1477 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 1.
I20260430 07:54:18.299400  1873 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:18.300498  1873 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.307190  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:18.307595  1608 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: voting while tombstoned based on last-logged opid 1.107
I20260430 07:54:18.307834  1608 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 1.
I20260430 07:54:18.315017  1873 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:18.319365  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:18.319677  1214 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 1.
I20260430 07:54:18.321784  1280 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281, 89bc1171ed404f908d3b0dae9c36c49b; no voters: 
I20260430 07:54:18.326237  1867 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Leader pre-election won for term 2
I20260430 07:54:18.326437  1867 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Starting leader election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:18.326553  1867 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:18.328526  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:18.328812  1738 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 1.
I20260430 07:54:18.329406  1411 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 89bc1171ed404f908d3b0dae9c36c49b, 9fcc3716c75543efaca7f7f4f0aec6fc; no voters: 
I20260430 07:54:18.329968  1868 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Leader pre-election won for term 2
I20260430 07:54:18.330319  1868 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Starting leader election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:18.330369  1867 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.330622  1868 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:18.330832  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:18.331050  1608 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: voting while tombstoned based on last-logged opid 1.107
I20260430 07:54:18.331514  1608 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 1.
I20260430 07:54:18.331912  1867 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 2 election: Requested vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:18.332669  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b"
I20260430 07:54:18.333847  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f"
I20260430 07:54:18.334048  1608 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: voting while tombstoned based on last-logged opid 1.107
I20260430 07:54:18.334004  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281"
I20260430 07:54:18.334178  1608 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:18.334221  1214 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:18.335538  1868 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.336062  1477 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 155ec5af1cab42a6aed09cdaab8a150f in current term 2: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:18.337126  1868 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 2 election: Requested vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:18.337908  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:18.337859  1607 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f"
I20260430 07:54:18.338119  1738 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:18.339205  1214 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 2.
I20260430 07:54:18.339454  1608 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 2.
I20260430 07:54:18.341714  1280 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 4 responses out of 5 voters: 3 yes votes; 1 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281; no voters: 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:18.342130  1867 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:54:18.342588  1738 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 2.
I20260430 07:54:18.342615  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:18.342980  1477 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 530ef53bc12a4d93b2eca797cd0b7281 in current term 2: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:18.343713  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:18.344038  1738 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 155ec5af1cab42a6aed09cdaab8a150f in current term 2: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:18.344573  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:18.344944  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:18.345233  1738 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 155ec5af1cab42a6aed09cdaab8a150f in current term 2: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:18.345151  1215 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281"
I20260430 07:54:18.345045  1214 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 89bc1171ed404f908d3b0dae9c36c49b in current term 2: Already voted for candidate 155ec5af1cab42a6aed09cdaab8a150f in this term.
I20260430 07:54:18.346334  1345 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:18.346536  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:18.347457  1410 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 5 responses out of 5 voters: 2 yes votes; 3 no votes. yes voters: 89bc1171ed404f908d3b0dae9c36c49b, 9fcc3716c75543efaca7f7f4f0aec6fc; no voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281
I20260430 07:54:18.347898  1868 raft_consensus.cc:2749] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20260430 07:54:18.349663  1867 raft_consensus.cc:697] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 LEADER]: Becoming Leader. State: Replica: 155ec5af1cab42a6aed09cdaab8a150f, State: Running, Role: LEADER
I20260430 07:54:18.351016  1867 consensus_queue.cc:237] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 106, Committed index: 106, Last appended: 1.107, Last appended by leader: 106, Current term: 2, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:18.355520  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:18.355870  1738 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 530ef53bc12a4d93b2eca797cd0b7281 in current term 2: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:18.356982  1345 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:18.357823  1148 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 4 responses out of 5 voters: 1 yes votes; 3 no votes. yes voters: 530ef53bc12a4d93b2eca797cd0b7281; no voters: 155ec5af1cab42a6aed09cdaab8a150f, 89bc1171ed404f908d3b0dae9c36c49b, 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:18.357972  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 2 candidate_status { last_received { term: 1 index: 107 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:18.358175  1608 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: voting while tombstoned based on last-logged opid 1.107
I20260430 07:54:18.358387  1608 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 530ef53bc12a4d93b2eca797cd0b7281 in current term 2: Already voted for candidate 155ec5af1cab42a6aed09cdaab8a150f in this term.
I20260430 07:54:18.358341  1873 raft_consensus.cc:2749] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20260430 07:54:18.358561  1072 catalog_manager.cc:5671] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f reported cstate change: term changed from 1 to 2, leader changed from 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4) to 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2). New cstate: current_term: 2 leader_uuid: "155ec5af1cab42a6aed09cdaab8a150f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } health_report { overall_health: UNKNOWN } } }
W20260430 07:54:18.833462  1281 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f -> Peer 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799): Couldn't send request to peer 05ac35aa75a24aa7affb7d3e3645707f. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:18.904397  1738 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Refusing update from remote peer 155ec5af1cab42a6aed09cdaab8a150f: Log matching property violated. Preceding OpId in replica: term: 1 index: 105. Preceding OpId from leader: term: 2 index: 108. (index mismatch)
I20260430 07:54:18.905541  1867 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [LEADER]: Connected to new peer: Peer: permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 108, Last known committed idx: 104, Time since last communication: 0.000s
I20260430 07:54:18.941026  1477 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Refusing update from remote peer 155ec5af1cab42a6aed09cdaab8a150f: Log matching property violated. Preceding OpId in replica: term: 1 index: 107. Preceding OpId from leader: term: 2 index: 108. (index mismatch)
I20260430 07:54:18.942130  1867 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [LEADER]: Connected to new peer: Peer: permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 108, Last known committed idx: 106, Time since last communication: 0.000s
I20260430 07:54:18.954284  1214 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Refusing update from remote peer 155ec5af1cab42a6aed09cdaab8a150f: Log matching property violated. Preceding OpId in replica: term: 1 index: 107. Preceding OpId from leader: term: 2 index: 108. (index mismatch)
I20260430 07:54:18.955102  1906 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [LEADER]: Connected to new peer: Peer: permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 108, Last known committed idx: 106, Time since last communication: 0.000s
I20260430 07:54:18.961398  1909 rpcz_store.cc:275] Call kudu.tserver.TabletServerService.Write from 127.0.0.1:56904 (ReqId={client: 5a255d4eee2b4744aec1f9b3e10df961, seq_no=105, attempt_no=1}) took 2164 ms. Trace:
I20260430 07:54:18.961727  1909 rpcz_store.cc:276] 0430 07:54:16.796514 (+     0us) service_pool.cc:168] Inserting onto call queue
0430 07:54:16.796662 (+   148us) service_pool.cc:225] Handling call
0430 07:54:18.961310 (+2164648us) inbound_call.cc:177] Queueing success response
Metrics: {}
I20260430 07:54:18.970415   420 cluster_itest_util.cc:258] Not converged past 1 yet: 2.108 2.108 2.108 2.108 <uninitialized op>
I20260430 07:54:19.079465   420 cluster_itest_util.cc:258] Not converged past 1 yet: 2.108 2.108 2.108 2.108 <uninitialized op>
I20260430 07:54:19.289310   420 cluster_itest_util.cc:258] Not converged past 1 yet: 2.108 2.108 2.108 2.108 <uninitialized op>
I20260430 07:54:19.415092  1912 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Initiating tablet copy from peer 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647)
I20260430 07:54:19.416126  1912 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.2:39647
I20260430 07:54:19.417634  1366 tablet_copy_service.cc:140] P 155ec5af1cab42a6aed09cdaab8a150f: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 05ac35aa75a24aa7affb7d3e3645707f ({username='slave'} at 127.0.105.4:39079)
I20260430 07:54:19.417896  1366 tablet_copy_service.cc:161] P 155ec5af1cab42a6aed09cdaab8a150f: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 05ac35aa75a24aa7affb7d3e3645707f at {username='slave'} at 127.0.105.4:39079: session id = 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:19.420754  1366 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:19.423395  1912 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:19.428155  1912 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.107
I20260430 07:54:19.428529  1912 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:19.431290  1912 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:19.431604  1912 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:19.435276  1912 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:19.439442  1912 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap starting.
I20260430 07:54:19.561530  1912 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap replayed 1/1 log segments. Stats: ops{read=108 overwritten=0 applied=108 ignored=0} inserts{seen=106 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:19.562104  1912 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap complete.
I20260430 07:54:19.562412  1912 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent bootstrapping tablet: real 0.123s	user 0.116s	sys 0.008s
I20260430 07:54:19.563697  1912 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:19.564028  1912 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 05ac35aa75a24aa7affb7d3e3645707f, State: Initialized, Role: FOLLOWER
I20260430 07:54:19.564451  1912 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 108, Last appended: 2.108, Last appended by leader: 108, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:19.565297  1912 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent starting tablet: real 0.003s	user 0.004s	sys 0.000s
I20260430 07:54:19.566499  1366 tablet_copy_service.cc:342] P 155ec5af1cab42a6aed09cdaab8a150f: Request end of tablet copy session 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.4:39079
I20260430 07:54:19.566759  1366 tablet_copy_service.cc:434] P 155ec5af1cab42a6aed09cdaab8a150f: ending tablet copy session 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 05ac35aa75a24aa7affb7d3e3645707f
I20260430 07:54:19.596431   420 tablet_copy-itest.cc:761] Iteration 2
W20260430 07:54:19.605497  1175 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:56904: Illegal state: replica 530ef53bc12a4d93b2eca797cd0b7281 is not leader of this config: current role FOLLOWER
W20260430 07:54:19.619006  1437 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:52424: Illegal state: replica 89bc1171ed404f908d3b0dae9c36c49b is not leader of this config: current role FOLLOWER
I20260430 07:54:19.885972   420 tablet_copy-itest.cc:780] Tombstoning follower tablet 40d7739e889e4609ac09c7384fa25c88 on TS 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:19.886997  1457 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:49238
I20260430 07:54:19.890156  1934 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: stopping tablet replica
I20260430 07:54:19.891469  1934 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Raft consensus shutting down.
W20260430 07:54:19.894310  1281 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f -> Peer 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471): Couldn't send request to peer 89bc1171ed404f908d3b0dae9c36c49b. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: STOPPING. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:19.896167  1934 pending_rounds.cc:70] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Trying to abort 1 pending ops.
I20260430 07:54:19.896993  1934 pending_rounds.cc:77] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Aborting op as it isn't in flight: id { term: 2 index: 129 } timestamp: 7280786062883393536 op_type: WRITE_OP write_request { tablet_id: "40d7739e889e4609ac09c7384fa25c88" schema { columns { name: "key" type: INT32 is_key: true is_nullable: false immutable: false } columns { name: "int_val" type: INT32 is_key: false is_nullable: false immutable: false } columns { name: "string_val" type: STRING is_key: false is_nullable: true immutable: false } } row_operations { rows: "<redacted>""\001\007\000+Rd\032\240\316yv\000\000\000\000\000\000\000\000\000@\000\000\000\000\000\000" indirect_data: "<redacted>""00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
" } external_consistency_mode: CLIENT_PROPAGATED propagated_timestamp: 7280786062835666944 authz_token { token_data: "\010\327\227\314\317\006\"3\n\005slave\022*\n 98919faf7f4347f1b945d0c425317cc9\020\001\030\001 \001(\001" signature: "<redacted>""\002\247\333\310\"\035\212\203v\305BJcb\334\345\374U\006\233\024\350\320\372\230\365\246\232\242f.\266\364\004\215\340D\275\366?\357\013A\320ih&m\271~U\225\305\306\323\315\202?K\207P\210\313\231" signing_key_seq_num: 0 } } request_id { client_id: "5a255d4eee2b4744aec1f9b3e10df961" seq_no: 126 first_incomplete_seq_no: 126 attempt_no: 0 }
I20260430 07:54:19.898325  1934 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Raft consensus is shut down!
I20260430 07:54:19.904646  1934 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:19.913380  1934 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.129
I20260430 07:54:19.913909  1934 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/wal/wals/40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:20.043429  1608 raft_consensus.cc:1217] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Deduplicated request from leader. Original: 1.107->[2.108-2.142]   Dedup: 2.108->[2.109-2.142]
I20260430 07:54:20.925140  1941 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Initiating tablet copy from peer 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647)
I20260430 07:54:20.928040  1941 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.2:39647
I20260430 07:54:20.932138  1366 tablet_copy_service.cc:140] P 155ec5af1cab42a6aed09cdaab8a150f: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 89bc1171ed404f908d3b0dae9c36c49b ({username='slave'} at 127.0.105.3:34905)
I20260430 07:54:20.932422  1366 tablet_copy_service.cc:161] P 155ec5af1cab42a6aed09cdaab8a150f: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 89bc1171ed404f908d3b0dae9c36c49b at {username='slave'} at 127.0.105.3:34905: session id = 89bc1171ed404f908d3b0dae9c36c49b-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:20.935973  1366 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:20.938867  1941 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:20.955009   420 tablet_copy-itest.cc:790] Tombstoning leader tablet 40d7739e889e4609ac09c7384fa25c88 on TS 155ec5af1cab42a6aed09cdaab8a150f
I20260430 07:54:20.960031  1326 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:41842
I20260430 07:54:20.961404  1943 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: stopping tablet replica
I20260430 07:54:20.967173  1943 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 LEADER]: Raft consensus shutting down.
I20260430 07:54:20.970129  1943 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: Raft consensus is shut down!
W20260430 07:54:20.970499  1175 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:56904: Illegal state: replica 530ef53bc12a4d93b2eca797cd0b7281 is not leader of this config: current role FOLLOWER
I20260430 07:54:20.972312  1943 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:20.980490  1943 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.219
I20260430 07:54:20.981026  1943 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/wal/wals/40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:20.981031  1941 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 2.129
I20260430 07:54:20.981536  1941 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:20.985668  1941 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:20.986009  1941 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet copy: Starting download of 1 WAL segments...
W20260430 07:54:20.986589  1568 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:60162: Illegal state: replica 05ac35aa75a24aa7affb7d3e3645707f is not leader of this config: current role FOLLOWER
I20260430 07:54:20.990681  1941 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:20.995260  1941 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Bootstrap starting.
W20260430 07:54:21.012146  1699 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:53432: Illegal state: replica 9fcc3716c75543efaca7f7f4f0aec6fc is not leader of this config: current role FOLLOWER
I20260430 07:54:21.316411  1941 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Bootstrap replayed 1/1 log segments. Stats: ops{read=217 overwritten=0 applied=217 ignored=0} inserts{seen=215 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:21.316975  1941 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Bootstrap complete.
I20260430 07:54:21.317308  1941 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Time spent bootstrapping tablet: real 0.322s	user 0.279s	sys 0.045s
I20260430 07:54:21.318802  1941 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:21.319052  1941 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89bc1171ed404f908d3b0dae9c36c49b, State: Initialized, Role: FOLLOWER
I20260430 07:54:21.319403  1941 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 217, Last appended: 2.217, Last appended by leader: 217, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:21.320586  1941 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b: Time spent starting tablet: real 0.003s	user 0.002s	sys 0.000s
I20260430 07:54:21.322192  1366 tablet_copy_service.cc:342] P 155ec5af1cab42a6aed09cdaab8a150f: Request end of tablet copy session 89bc1171ed404f908d3b0dae9c36c49b-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.3:34905
I20260430 07:54:21.322357  1366 tablet_copy_service.cc:434] P 155ec5af1cab42a6aed09cdaab8a150f: ending tablet copy session 89bc1171ed404f908d3b0dae9c36c49b-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:22.464365  1949 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Starting pre-election (detected failure of leader 155ec5af1cab42a6aed09cdaab8a150f)
I20260430 07:54:22.464570  1949 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:22.465242  1950 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Starting pre-election (detected failure of leader 155ec5af1cab42a6aed09cdaab8a150f)
I20260430 07:54:22.465425  1950 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:22.466451  1950 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:22.466959  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:22.466976  1215 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:22.467213  1476 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:22.467254  1477 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 2.
I20260430 07:54:22.467243  1215 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 2.
I20260430 07:54:22.467628  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:22.467684  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:22.467696  1345 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:22.467888  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 2.
I20260430 07:54:22.467887  1346 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: voting while tombstoned based on last-logged opid 2.219
I20260430 07:54:22.468081  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 2.
I20260430 07:54:22.468214  1542 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 530ef53bc12a4d93b2eca797cd0b7281, 89bc1171ed404f908d3b0dae9c36c49b; no voters: 
I20260430 07:54:22.468732  1148 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281, 9fcc3716c75543efaca7f7f4f0aec6fc; no voters: 
I20260430 07:54:22.468750  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:22.468950  1608 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 2.
I20260430 07:54:22.469152  1949 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:22.469528  1949 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Leader pre-election won for term 3
I20260430 07:54:22.469645  1949 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Starting leader election (detected failure of leader 155ec5af1cab42a6aed09cdaab8a150f)
I20260430 07:54:22.469269  1950 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20260430 07:54:22.469769  1949 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:54:22.469802  1950 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Starting leader election (detected failure of leader 155ec5af1cab42a6aed09cdaab8a150f)
I20260430 07:54:22.470093  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:22.470345  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 2.
I20260430 07:54:22.470587  1950 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:54:22.472474  1949 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:22.472725  1950 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:22.474011  1950 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 3 election: Requested vote from peers 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:22.474327  1949 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 3 election: Requested vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:22.474536  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:22.474558  1345 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:22.474745  1215 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281"
I20260430 07:54:22.475135  1215 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Leader election vote request: Denying vote to candidate 05ac35aa75a24aa7affb7d3e3645707f in current term 3: Already voted for candidate 530ef53bc12a4d93b2eca797cd0b7281 in this term.
I20260430 07:54:22.475138  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:22.475242  1476 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b"
I20260430 07:54:22.475344  1739 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:54:22.474787  1346 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: voting while tombstoned based on last-logged opid 2.219
I20260430 07:54:22.475474  1476 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:54:22.475522  1346 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:54:22.477061  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f"
I20260430 07:54:22.477380  1608 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Leader election vote request: Denying vote to candidate 530ef53bc12a4d93b2eca797cd0b7281 in current term 3: Already voted for candidate 05ac35aa75a24aa7affb7d3e3645707f in this term.
I20260430 07:54:22.477635  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b"
I20260430 07:54:22.477793  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "05ac35aa75a24aa7affb7d3e3645707f" candidate_term: 3 candidate_status { last_received { term: 2 index: 219 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:22.478623  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 3.
I20260430 07:54:22.479472  1150 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 3 election: Election decided. Result: candidate lost. Election summary: received 4 responses out of 5 voters: 1 yes votes; 3 no votes. yes voters: 530ef53bc12a4d93b2eca797cd0b7281; no voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:22.479682  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 3.
I20260430 07:54:22.480783  1476 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 05ac35aa75a24aa7affb7d3e3645707f in term 3.
I20260430 07:54:22.481117  1950 raft_consensus.cc:2749] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Leader election lost for term 3. Reason: could not achieve majority
I20260430 07:54:22.481516  1543 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 5 responses out of 5 voters: 3 yes votes; 2 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 89bc1171ed404f908d3b0dae9c36c49b; no voters: 530ef53bc12a4d93b2eca797cd0b7281, 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:22.482183  1949 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Leader election won for term 3
I20260430 07:54:22.482398  1949 raft_consensus.cc:697] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 LEADER]: Becoming Leader. State: Replica: 05ac35aa75a24aa7affb7d3e3645707f, State: Running, Role: LEADER
I20260430 07:54:22.483358  1949 consensus_queue.cc:237] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 219, Committed index: 219, Last appended: 2.219, Last appended by leader: 219, Current term: 3, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:22.489852  1072 catalog_manager.cc:5671] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f reported cstate change: term changed from 2 to 3, leader changed from 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2) to 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4). New cstate: current_term: 3 leader_uuid: "05ac35aa75a24aa7affb7d3e3645707f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } health_report { overall_health: UNKNOWN } } }
I20260430 07:54:22.544530  1215 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 2 index: 219. Preceding OpId from leader: term: 3 index: 221. (index mismatch)
I20260430 07:54:22.544530  1739 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 2 index: 219. Preceding OpId from leader: term: 3 index: 221. (index mismatch)
W20260430 07:54:22.545022  1541 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f -> Peer 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647): Couldn't send request to peer 155ec5af1cab42a6aed09cdaab8a150f. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:22.545270  1958 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 220, Last known committed idx: 219, Time since last communication: 0.000s
I20260430 07:54:22.545555  1476 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Refusing update from remote peer 05ac35aa75a24aa7affb7d3e3645707f: Log matching property violated. Preceding OpId in replica: term: 2 index: 217. Preceding OpId from leader: term: 3 index: 221. (index mismatch)
I20260430 07:54:22.545576  1949 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 220, Last known committed idx: 219, Time since last communication: 0.000s
I20260430 07:54:22.546279  1951 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [LEADER]: Connected to new peer: Peer: permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 220, Last known committed idx: 217, Time since last communication: 0.000s
I20260430 07:54:22.572082   420 cluster_itest_util.cc:258] Not converged past 1 yet: <uninitialized op> 3.221 3.221 3.221 3.221
I20260430 07:54:22.572134  1964 mvcc.cc:204] Tried to move back new op lower bound from 7280786073774231552 to 7280786073546956800. Current Snapshot: MvccSnapshot[applied={T|T < 7280786073774231552 or (T in {7280786073774231552})}]
I20260430 07:54:22.680166   420 cluster_itest_util.cc:258] Not converged past 1 yet: <uninitialized op> 3.221 3.221 3.221 3.221
I20260430 07:54:22.890203   420 cluster_itest_util.cc:258] Not converged past 1 yet: <uninitialized op> 3.221 3.221 3.221 3.221
I20260430 07:54:23.143976  1973 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Initiating tablet copy from peer 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799)
I20260430 07:54:23.145147  1973 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.4:45799
I20260430 07:54:23.146409  1628 tablet_copy_service.cc:140] P 05ac35aa75a24aa7affb7d3e3645707f: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 155ec5af1cab42a6aed09cdaab8a150f ({username='slave'} at 127.0.105.2:34337)
I20260430 07:54:23.146624  1628 tablet_copy_service.cc:161] P 05ac35aa75a24aa7affb7d3e3645707f: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 155ec5af1cab42a6aed09cdaab8a150f at {username='slave'} at 127.0.105.2:34337: session id = 155ec5af1cab42a6aed09cdaab8a150f-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:23.148783  1628 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:23.150750  1973 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:23.156327  1973 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 2.219
I20260430 07:54:23.156765  1973 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:23.159576  1973 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:23.159976  1973 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:23.165136  1973 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:23.169137  1973 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Bootstrap starting.
I20260430 07:54:23.200896   420 cluster_itest_util.cc:258] Not converged past 1 yet: <uninitialized op> 3.221 3.221 3.221 3.221
I20260430 07:54:23.445691  1973 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Bootstrap replayed 1/1 log segments. Stats: ops{read=221 overwritten=0 applied=221 ignored=0} inserts{seen=218 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:23.446166  1973 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Bootstrap complete.
I20260430 07:54:23.446408  1973 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Time spent bootstrapping tablet: real 0.277s	user 0.230s	sys 0.050s
I20260430 07:54:23.447265  1973 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:23.447440  1973 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 155ec5af1cab42a6aed09cdaab8a150f, State: Initialized, Role: FOLLOWER
I20260430 07:54:23.447785  1973 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 221, Last appended: 3.221, Last appended by leader: 221, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:23.448755  1973 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f: Time spent starting tablet: real 0.002s	user 0.003s	sys 0.001s
I20260430 07:54:23.449920  1628 tablet_copy_service.cc:342] P 05ac35aa75a24aa7affb7d3e3645707f: Request end of tablet copy session 155ec5af1cab42a6aed09cdaab8a150f-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.2:34337
I20260430 07:54:23.450594  1628 tablet_copy_service.cc:434] P 05ac35aa75a24aa7affb7d3e3645707f: ending tablet copy session 155ec5af1cab42a6aed09cdaab8a150f-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 155ec5af1cab42a6aed09cdaab8a150f
I20260430 07:54:23.611140   420 tablet_copy-itest.cc:761] Iteration 3
I20260430 07:54:23.674741  1346 raft_consensus.cc:1217] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Deduplicated request from leader. Original: 2.219->[3.220-3.224]   Dedup: 3.221->[3.222-3.224]
I20260430 07:54:23.913996   420 tablet_copy-itest.cc:780] Tombstoning follower tablet 40d7739e889e4609ac09c7384fa25c88 on TS 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:23.914985  1719 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:42396
I20260430 07:54:23.934971  1999 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: stopping tablet replica
I20260430 07:54:23.936764  1999 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Raft consensus shutting down.
I20260430 07:54:23.937433  1999 pending_rounds.cc:70] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Trying to abort 1 pending ops.
I20260430 07:54:23.937611  1999 pending_rounds.cc:77] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Aborting op as it isn't in flight: id { term: 3 index: 242 } timestamp: 7280786079385546752 op_type: WRITE_OP write_request { tablet_id: "40d7739e889e4609ac09c7384fa25c88" schema { columns { name: "key" type: INT32 is_key: true is_nullable: false immutable: false } columns { name: "int_val" type: INT32 is_key: false is_nullable: false immutable: false } columns { name: "string_val" type: STRING is_key: false is_nullable: true immutable: false } } row_operations { rows: "<redacted>""\001\007\000\014\332)\024\316\367{\377\000\000\000\000\000\000\000\000\000@\000\000\000\000\000\000" indirect_data: "<redacted>""000000[... long run of '0' padding characters elided ...]
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" } external_consistency_mode: CLIENT_PROPAGATED propagated_timestamp: 7280786079320035328 authz_token { token_data: "\010\333\227\314\317\006\"3\n\005slave\022*\n 98919faf7f4347f1b945d0c425317cc9\020\001\030\001 \001(\001" signature: "<redacted>""n\333\336\222\3311ar\341\336\034\302\003!7O\225\347\337\256s\031\246\240\3325\206\337\003Y\265\375\300\023\370%\235\303\237a\343\262uD\001\335\331\306\231(j\225\201%\370\352\271.\326\245\304{\370M" signing_key_seq_num: 0 } } request_id { client_id: "5a255d4eee2b4744aec1f9b3e10df961" seq_no: 238 first_incomplete_seq_no: 238 attempt_no: 0 }
I20260430 07:54:23.938505  1999 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Raft consensus is shut down!
I20260430 07:54:23.939630  1999 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
W20260430 07:54:23.944092  1542 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f -> Peer 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847): Couldn't send request to peer 9fcc3716c75543efaca7f7f4f0aec6fc. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: STOPPING. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:23.945849  1999 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.242
I20260430 07:54:23.946130  1999 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/wal/wals/40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:25.144213  2001 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Initiating tablet copy from peer 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799)
I20260430 07:54:25.145107  2001 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.4:45799
I20260430 07:54:25.146867  1628 tablet_copy_service.cc:140] P 05ac35aa75a24aa7affb7d3e3645707f: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 9fcc3716c75543efaca7f7f4f0aec6fc ({username='slave'} at 127.0.105.5:40793)
I20260430 07:54:25.147050  1628 tablet_copy_service.cc:161] P 05ac35aa75a24aa7affb7d3e3645707f: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 9fcc3716c75543efaca7f7f4f0aec6fc at {username='slave'} at 127.0.105.5:40793: session id = 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:25.154703  1628 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:25.156054  2001 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:25.164817  2001 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 3.242
I20260430 07:54:25.165321  2001 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:25.166899   420 tablet_copy-itest.cc:790] Tombstoning leader tablet 40d7739e889e4609ac09c7384fa25c88 on TS 05ac35aa75a24aa7affb7d3e3645707f
I20260430 07:54:25.168144  1588 tablet_service.cc:1558] Processing DeleteTablet for tablet 40d7739e889e4609ac09c7384fa25c88 with delete_type TABLET_DATA_TOMBSTONED from {username='slave'} at 127.0.0.1:60160
I20260430 07:54:25.171265  2003 tablet_replica.cc:333] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: stopping tablet replica
I20260430 07:54:25.171591  2001 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:25.171572  2003 raft_consensus.cc:2243] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 LEADER]: Raft consensus shutting down.
I20260430 07:54:25.171906  2001 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:25.176334  2003 raft_consensus.cc:2272] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Raft consensus is shut down!
I20260430 07:54:25.178824  2003 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:54:25.180380  2001 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:25.183908  2001 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap starting.
W20260430 07:54:25.184958  1175 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:56904: Illegal state: replica 530ef53bc12a4d93b2eca797cd0b7281 is not leader of this config: current role FOLLOWER
I20260430 07:54:25.186626  2003 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.323
I20260430 07:54:25.186754  2003 log.cc:1199] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/wal/wals/40d7739e889e4609ac09c7384fa25c88
W20260430 07:54:25.190168  1437 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:52424: Illegal state: replica 89bc1171ed404f908d3b0dae9c36c49b is not leader of this config: current role FOLLOWER
W20260430 07:54:25.202244  1306 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:34100: Illegal state: replica 155ec5af1cab42a6aed09cdaab8a150f is not leader of this config: current role FOLLOWER
I20260430 07:54:25.704087  2001 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap replayed 1/1 log segments. Stats: ops{read=322 overwritten=0 applied=321 ignored=0} inserts{seen=318 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20260430 07:54:25.704638  2001 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Bootstrap complete.
I20260430 07:54:25.704911  2001 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent bootstrapping tablet: real 0.521s	user 0.432s	sys 0.088s
I20260430 07:54:25.705756  2001 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:25.707412  2001 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9fcc3716c75543efaca7f7f4f0aec6fc, State: Initialized, Role: FOLLOWER
I20260430 07:54:25.710572  2001 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 321, Last appended: 3.322, Last appended by leader: 322, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:25.714792  2001 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc: Time spent starting tablet: real 0.007s	user 0.010s	sys 0.000s
I20260430 07:54:25.716229  1628 tablet_copy_service.cc:342] P 05ac35aa75a24aa7affb7d3e3645707f: Request end of tablet copy session 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.5:40793
I20260430 07:54:25.717147  1628 tablet_copy_service.cc:434] P 05ac35aa75a24aa7affb7d3e3645707f: ending tablet copy session 9fcc3716c75543efaca7f7f4f0aec6fc-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:26.670987  2007 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:26.671273  2007 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.671784  2008 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:26.671947  2008 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.673624  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:26.673646  1476 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:26.673794  2007 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:26.673961  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 3.
I20260430 07:54:26.673019  2008 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:26.674448  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b" is_pre_election: true
I20260430 07:54:26.674504  1607 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:26.674599  1608 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:26.674719  1607 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: voting while tombstoned based on last-logged opid 3.323
I20260430 07:54:26.674888  1607 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 3.
I20260430 07:54:26.674131  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:26.675385  1214 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 155ec5af1cab42a6aed09cdaab8a150f in term 3.
I20260430 07:54:26.675510  1150 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 4 responses out of 5 voters: 3 yes votes; 1 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281; no voters: 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:26.675539  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "155ec5af1cab42a6aed09cdaab8a150f" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:26.675634  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:26.675791  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 3.
I20260430 07:54:26.675985  2008 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20260430 07:54:26.676134  2008 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Starting leader election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:26.676280  2008 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:54:26.676294  1280 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate lost. Election summary: received 5 responses out of 5 voters: 2 yes votes; 3 no votes. yes voters: 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281; no voters: 05ac35aa75a24aa7affb7d3e3645707f, 89bc1171ed404f908d3b0dae9c36c49b, 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:26.676707  2007 raft_consensus.cc:2749] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Leader pre-election lost for term 4. Reason: could not achieve majority
I20260430 07:54:26.680545  2009 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Starting pre-election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:26.681224  2009 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.681972  2008 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.683234  1607 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f" is_pre_election: true
I20260430 07:54:26.683280  2009 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:26.683468  1607 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: voting while tombstoned based on last-logged opid 3.323
I20260430 07:54:26.683647  1607 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 3.
I20260430 07:54:26.683764  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f" is_pre_election: true
I20260430 07:54:26.683854  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281" is_pre_election: true
I20260430 07:54:26.684008  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 3.
I20260430 07:54:26.684139  1214 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 4 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 89bc1171ed404f908d3b0dae9c36c49b in current term 4: Already voted for candidate 530ef53bc12a4d93b2eca797cd0b7281 in this term.
I20260430 07:54:26.684492  1410 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 5 voters: 3 yes votes; 0 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 89bc1171ed404f908d3b0dae9c36c49b; no voters: 
I20260430 07:54:26.684559  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" is_pre_election: true
I20260430 07:54:26.684785  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 89bc1171ed404f908d3b0dae9c36c49b in term 3.
I20260430 07:54:26.684873  2009 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Leader pre-election won for term 4
I20260430 07:54:26.684979  2009 raft_consensus.cc:493] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Starting leader election (detected failure of leader 05ac35aa75a24aa7affb7d3e3645707f)
I20260430 07:54:26.685124  2009 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:54:26.685874  2008 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 4 election: Requested vote from peers 89bc1171ed404f908d3b0dae9c36c49b (127.0.105.3:39471), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:26.685928  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:26.685930  1477 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "89bc1171ed404f908d3b0dae9c36c49b"
I20260430 07:54:26.686156  1346 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:54:26.686748  1607 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f"
I20260430 07:54:26.686980  1607 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: voting while tombstoned based on last-logged opid 3.323
I20260430 07:54:26.687104  1607 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:54:26.689020  1346 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 4.
I20260430 07:54:26.689509  1739 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "530ef53bc12a4d93b2eca797cd0b7281" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:26.689489  2009 raft_consensus.cc:515] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.689759  1739 raft_consensus.cc:3060] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:54:26.689980  1477 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 4 FOLLOWER]: Leader election vote request: Denying vote to candidate 530ef53bc12a4d93b2eca797cd0b7281 in current term 4: Already voted for candidate 89bc1171ed404f908d3b0dae9c36c49b in this term.
I20260430 07:54:26.690805  1607 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 4.
I20260430 07:54:26.691548  1214 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "530ef53bc12a4d93b2eca797cd0b7281"
I20260430 07:54:26.691823  1214 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 4 FOLLOWER]: Leader election vote request: Denying vote to candidate 89bc1171ed404f908d3b0dae9c36c49b in current term 4: Already voted for candidate 530ef53bc12a4d93b2eca797cd0b7281 in this term.
I20260430 07:54:26.692238  1346 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "155ec5af1cab42a6aed09cdaab8a150f"
I20260430 07:54:26.692278  2009 leader_election.cc:290] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 4 election: Requested vote from peers 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567), 155ec5af1cab42a6aed09cdaab8a150f (127.0.105.2:39647), 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799), 9fcc3716c75543efaca7f7f4f0aec6fc (127.0.105.5:34847)
I20260430 07:54:26.692543  1346 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 4 FOLLOWER]: Leader election vote request: Denying vote to candidate 89bc1171ed404f908d3b0dae9c36c49b in current term 4: Already voted for candidate 530ef53bc12a4d93b2eca797cd0b7281 in this term.
I20260430 07:54:26.693004  1150 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 4 responses out of 5 voters: 3 yes votes; 1 no votes. yes voters: 05ac35aa75a24aa7affb7d3e3645707f, 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281; no voters: 89bc1171ed404f908d3b0dae9c36c49b
I20260430 07:54:26.693411  1738 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc"
I20260430 07:54:26.694087  1739 raft_consensus.cc:2468] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 530ef53bc12a4d93b2eca797cd0b7281 in term 4.
I20260430 07:54:26.694104  1410 leader_election.cc:304] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [CANDIDATE]: Term 4 election: Election decided. Result: candidate lost. Election summary: received 4 responses out of 5 voters: 1 yes votes; 3 no votes. yes voters: 89bc1171ed404f908d3b0dae9c36c49b; no voters: 155ec5af1cab42a6aed09cdaab8a150f, 530ef53bc12a4d93b2eca797cd0b7281, 9fcc3716c75543efaca7f7f4f0aec6fc
I20260430 07:54:26.694495  2008 raft_consensus.cc:2804] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 4 FOLLOWER]: Leader election won for term 4
I20260430 07:54:26.694496  2009 raft_consensus.cc:2749] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 4 FOLLOWER]: Leader election lost for term 4. Reason: could not achieve majority
I20260430 07:54:26.694917  1607 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "40d7739e889e4609ac09c7384fa25c88" candidate_uuid: "89bc1171ed404f908d3b0dae9c36c49b" candidate_term: 4 candidate_status { last_received { term: 3 index: 323 } } ignore_live_leader: false dest_uuid: "05ac35aa75a24aa7affb7d3e3645707f"
I20260430 07:54:26.695082  2008 raft_consensus.cc:697] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [term 4 LEADER]: Becoming Leader. State: Replica: 530ef53bc12a4d93b2eca797cd0b7281, State: Running, Role: LEADER
I20260430 07:54:26.695268  1607 raft_consensus.cc:1803] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: voting while tombstoned based on last-logged opid 3.323
I20260430 07:54:26.695533  1607 raft_consensus.cc:2393] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: Leader election vote request: Denying vote to candidate 89bc1171ed404f908d3b0dae9c36c49b in current term 4: Already voted for candidate 530ef53bc12a4d93b2eca797cd0b7281 in this term.
I20260430 07:54:26.695766  2008 consensus_queue.cc:237] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 323, Committed index: 323, Last appended: 3.323, Last appended by leader: 323, Current term: 4, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:26.701732  1072 catalog_manager.cc:5671] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 reported cstate change: term changed from 3 to 4, leader changed from 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4) to 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1). New cstate: current_term: 4 leader_uuid: "530ef53bc12a4d93b2eca797cd0b7281" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } health_report { overall_health: UNKNOWN } } }
I20260430 07:54:26.757313  1739 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 9fcc3716c75543efaca7f7f4f0aec6fc [term 4 FOLLOWER]: Refusing update from remote peer 530ef53bc12a4d93b2eca797cd0b7281: Log matching property violated. Preceding OpId in replica: term: 3 index: 322. Preceding OpId from leader: term: 4 index: 325. (index mismatch)
I20260430 07:54:26.757871  1346 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 155ec5af1cab42a6aed09cdaab8a150f [term 4 FOLLOWER]: Refusing update from remote peer 530ef53bc12a4d93b2eca797cd0b7281: Log matching property violated. Preceding OpId in replica: term: 3 index: 323. Preceding OpId from leader: term: 4 index: 325. (index mismatch)
I20260430 07:54:26.760385  1477 raft_consensus.cc:1275] T 40d7739e889e4609ac09c7384fa25c88 P 89bc1171ed404f908d3b0dae9c36c49b [term 4 FOLLOWER]: Refusing update from remote peer 530ef53bc12a4d93b2eca797cd0b7281: Log matching property violated. Preceding OpId in replica: term: 3 index: 323. Preceding OpId from leader: term: 4 index: 325. (index mismatch)
I20260430 07:54:26.760411  2018 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [LEADER]: Connected to new peer: Peer: permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 324, Last known committed idx: 323, Time since last communication: 0.000s
I20260430 07:54:26.761404  2008 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 324, Last known committed idx: 321, Time since last communication: 0.000s
W20260430 07:54:26.762637  1150 consensus_peers.cc:597] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 -> Peer 05ac35aa75a24aa7affb7d3e3645707f (127.0.105.4:45799): Couldn't send request to peer 05ac35aa75a24aa7affb7d3e3645707f. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:54:26.762640  2017 consensus_queue.cc:1048] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281 [LEADER]: Connected to new peer: Peer: permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 324, Last known committed idx: 323, Time since last communication: 0.001s
I20260430 07:54:26.800153  2020 mvcc.cc:204] Tried to move back new op lower bound from 7280786091025764352 to 7280786090792009728. Current Snapshot: MvccSnapshot[applied={T|T < 7280786091025764352}]
I20260430 07:54:26.809768   420 cluster_itest_util.cc:258] Not converged past 1 yet: 4.325 4.325 4.325 4.325 <uninitialized op>
I20260430 07:54:26.920394   420 cluster_itest_util.cc:258] Not converged past 1 yet: 4.325 4.325 4.325 4.325 <uninitialized op>
I20260430 07:54:27.135618   420 cluster_itest_util.cc:258] Not converged past 1 yet: 4.325 4.325 4.325 4.325 <uninitialized op>
I20260430 07:54:27.234560  2031 ts_tablet_manager.cc:933] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Initiating tablet copy from peer 530ef53bc12a4d93b2eca797cd0b7281 (127.0.105.1:42567)
I20260430 07:54:27.235918  2031 tablet_copy_client.cc:323] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.1:42567
I20260430 07:54:27.238402  1235 tablet_copy_service.cc:140] P 530ef53bc12a4d93b2eca797cd0b7281: Received BeginTabletCopySession request for tablet 40d7739e889e4609ac09c7384fa25c88 from peer 05ac35aa75a24aa7affb7d3e3645707f ({username='slave'} at 127.0.105.4:40033)
I20260430 07:54:27.239095  1235 tablet_copy_service.cc:161] P 530ef53bc12a4d93b2eca797cd0b7281: Beginning new tablet copy session on tablet 40d7739e889e4609ac09c7384fa25c88 from peer 05ac35aa75a24aa7affb7d3e3645707f at {username='slave'} at 127.0.105.4:40033: session id = 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:27.244916  1235 tablet_copy_source_session.cc:215] T 40d7739e889e4609ac09c7384fa25c88 P 530ef53bc12a4d93b2eca797cd0b7281: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:27.247126  2031 ts_tablet_manager.cc:1916] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:54:27.257386  2031 ts_tablet_manager.cc:1929] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 3.323
I20260430 07:54:27.257946  2031 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 40d7739e889e4609ac09c7384fa25c88. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:27.265029  2031 tablet_copy_client.cc:806] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:27.265650  2031 tablet_copy_client.cc:670] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:27.275564  2031 tablet_copy_client.cc:538] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:27.283859  2031 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap starting.
I20260430 07:54:27.445609   420 cluster_itest_util.cc:258] Not converged past 1 yet: 4.325 4.325 4.325 4.325 <uninitialized op>
I20260430 07:54:27.857416   420 cluster_itest_util.cc:258] Not converged past 1 yet: 4.325 4.325 4.325 4.325 <uninitialized op>
I20260430 07:54:27.881870  2031 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap replayed 1/1 log segments. Stats: ops{read=325 overwritten=0 applied=325 ignored=0} inserts{seen=321 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:27.882970  2031 tablet_bootstrap.cc:492] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Bootstrap complete.
I20260430 07:54:27.883725  2031 ts_tablet_manager.cc:1403] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent bootstrapping tablet: real 0.600s	user 0.507s	sys 0.092s
I20260430 07:54:27.884635  2031 raft_consensus.cc:359] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:27.884817  2031 raft_consensus.cc:740] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: Becoming Follower/Learner. State: Replica: 05ac35aa75a24aa7affb7d3e3645707f, State: Initialized, Role: FOLLOWER
I20260430 07:54:27.885100  2031 consensus_queue.cc:260] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 325, Last appended: 4.325, Last appended by leader: 325, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "530ef53bc12a4d93b2eca797cd0b7281" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42567 } } peers { permanent_uuid: "89bc1171ed404f908d3b0dae9c36c49b" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 39471 } } peers { permanent_uuid: "155ec5af1cab42a6aed09cdaab8a150f" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 39647 } } peers { permanent_uuid: "05ac35aa75a24aa7affb7d3e3645707f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 45799 } } peers { permanent_uuid: "9fcc3716c75543efaca7f7f4f0aec6fc" member_type: VOTER last_known_addr { host: "127.0.105.5" port: 34847 } }
I20260430 07:54:27.886545  2031 ts_tablet_manager.cc:1434] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f: Time spent starting tablet: real 0.003s	user 0.005s	sys 0.000s
I20260430 07:54:27.888870  1235 tablet_copy_service.cc:342] P 530ef53bc12a4d93b2eca797cd0b7281: Request end of tablet copy session 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88 received from {username='slave'} at 127.0.105.4:40033
I20260430 07:54:27.890179  1235 tablet_copy_service.cc:434] P 530ef53bc12a4d93b2eca797cd0b7281: ending tablet copy session 05ac35aa75a24aa7affb7d3e3645707f-40d7739e889e4609ac09c7384fa25c88 on tablet 40d7739e889e4609ac09c7384fa25c88 with peer 05ac35aa75a24aa7affb7d3e3645707f
I20260430 07:54:28.316546  1607 raft_consensus.cc:1217] T 40d7739e889e4609ac09c7384fa25c88 P 05ac35aa75a24aa7affb7d3e3645707f [term 4 FOLLOWER]: Deduplicated request from leader. Original: 3.323->[4.324-4.325]   Dedup: 4.325->[]
I20260430 07:54:28.510135  1588 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:54:28.520028  1326 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:54:28.538069  1195 tablet_service.cc:1467] Tablet server has 1 leaders and 0 scanners
I20260430 07:54:28.555049  1457 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:54:28.561484  1719 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |      Address       | Status
----------------------------------+--------------------+---------
 9416b36e8a7e46edaa84c3da39eb9c2e | 127.0.105.62:38847 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                                      Value                                                                                      |      Tags       |         Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                                             | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                                             | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                                            | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                               | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                                            | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                            | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                              | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                                             | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |       Value        |         Master
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:44475 | all 1 server(s) checked
 time_source         | builtin            | all 1 server(s) checked

Tablet Server Summary
               UUID               |      Address      | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
 05ac35aa75a24aa7affb7d3e3645707f | 127.0.105.4:45799 | HEALTHY | <none>   |       0        |       0
 155ec5af1cab42a6aed09cdaab8a150f | 127.0.105.2:39647 | HEALTHY | <none>   |       0        |       0
 530ef53bc12a4d93b2eca797cd0b7281 | 127.0.105.1:42567 | HEALTHY | <none>   |       1        |       0
 89bc1171ed404f908d3b0dae9c36c49b | 127.0.105.3:39471 | HEALTHY | <none>   |       0        |       0
 9fcc3716c75543efaca7f7f4f0aec6fc | 127.0.105.5:34847 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       5

Unusual flags for Tablet Server:
               Flag               |                                                                                    Value                                                                                    |      Tags       |      Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_server_key_size             | 768                                                                                                                                                                         | experimental    | all 5 server(s) checked
 local_ip_for_outbound_sockets    | 127.0.105.1                                                                                                                                                                 | experimental    | 127.0.105.1:42567
 local_ip_for_outbound_sockets    | 127.0.105.2                                                                                                                                                                 | experimental    | 127.0.105.2:39647
 local_ip_for_outbound_sockets    | 127.0.105.3                                                                                                                                                                 | experimental    | 127.0.105.3:39471
 local_ip_for_outbound_sockets    | 127.0.105.4                                                                                                                                                                 | experimental    | 127.0.105.4:45799
 local_ip_for_outbound_sockets    | 127.0.105.5                                                                                                                                                                 | experimental    | 127.0.105.5:34847
 never_fsync                      | true                                                                                                                                                                        | unsafe,advanced | all 5 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                           | unsafe,hidden   | all 5 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                        | unsafe          | all 5 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                          | hidden          | all 5 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb | hidden          | 127.0.105.1:42567
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb | hidden          | 127.0.105.2:39647
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb | hidden          | 127.0.105.3:39471
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb | hidden          | 127.0.105.4:45799
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest.1777535629930685-420-0/minicluster-data/ts-4/data/info.pb | hidden          | 127.0.105.5:34847

Flags of checked categories for Tablet Server:
        Flag         |       Value        |      Tablet Server
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:44475 | all 5 server(s) checked
 time_source         | builtin            | all 5 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 6 server(s) checked

Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
     Name      | RF | Status  | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------------+----+---------+---------------+---------+------------+------------------+-------------
 test-workload | 5  | HEALTHY | 1             | 1       | 0          | 0                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 1
 Median         | 1
 Third Quartile | 1
 Maximum        | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 5
 Tables         | 1
 Tablets        | 1
 Replicas       | 5

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

OK
I20260430 07:54:28.631541   420 log_verifier.cc:126] Checking tablet 40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:29.156694   420 log_verifier.cc:177] Verified matching terms for 325 ops in tablet 40d7739e889e4609ac09c7384fa25c88
I20260430 07:54:29.537173   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1133
I20260430 07:54:29.539791  1255 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:29.681970   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1133
I20260430 07:54:29.712227   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1264
I20260430 07:54:29.713588  1385 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:29.855684   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1264
W20260430 07:54:29.890242  1061 connection.cc:570] server connection from 127.0.105.2:46267 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:54:29.891134   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1395
I20260430 07:54:29.893641  1516 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:30.024370   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1395
I20260430 07:54:30.052551   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1526
I20260430 07:54:30.054214  1648 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:30.200551   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1526
I20260430 07:54:30.244818   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1657
I20260430 07:54:30.246163  1779 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:30.393895   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1657
I20260430 07:54:30.423107   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 1043
I20260430 07:54:30.424364  1104 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:30.551302   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 1043
2026-04-30T07:54:30Z chronyd exiting
[       OK ] TabletCopyITest.TestDeleteLeaderDuringTabletCopyStressTest (19911 ms)
[ RUN      ] TabletCopyITest.TestTabletCopyThrottling
2026-04-30T07:54:30Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2026-04-30T07:54:30Z Disabled control of system clock
I20260430 07:54:30.653901   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:40041
--webserver_interface=127.0.105.62
--webserver_port=0
--builtin_ntp_servers=127.0.105.20:45163
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:40041
--master_tombstone_evicted_tablet_replicas=false with env {}
W20260430 07:54:31.127923  2089 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:31.128518  2089 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:31.128656  2089 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:31.142638  2089 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:54:31.142817  2089 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:31.143052  2089 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:54:31.143172  2089 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:54:31.157526  2089 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:45163
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
--master_tombstone_evicted_tablet_replicas=false
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:40041
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:40041
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:31.160178  2089 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:31.164412  2089 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:31.181003  2094 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:31.181151  2095 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:31.184708  2097 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:31.186796  2089 server_base.cc:1061] running on GCE node
I20260430 07:54:31.188361  2089 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:31.191119  2089 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:31.192540  2089 hybrid_clock.cc:648] HybridClock initialized: now 1777535671192452 us; error 90 us; skew 500 ppm
I20260430 07:54:31.193136  2089 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:31.198278  2089 webserver.cc:492] Webserver started at http://127.0.105.62:34271/ using document root <none> and password file <none>
I20260430 07:54:31.200476  2089 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:31.201555  2089 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:31.203161  2089 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:31.207361  2089 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/instance:
uuid: "7a726e46b5764d19a25c166ef9a48881"
format_stamp: "Formatted at 2026-04-30 07:54:31 on dist-test-slave-1g5s"
I20260430 07:54:31.209509  2089 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal/instance:
uuid: "7a726e46b5764d19a25c166ef9a48881"
format_stamp: "Formatted at 2026-04-30 07:54:31 on dist-test-slave-1g5s"
I20260430 07:54:31.219842  2089 fs_manager.cc:696] Time spent creating directory manager: real 0.010s	user 0.006s	sys 0.002s
I20260430 07:54:31.227453  2103 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:31.230443  2089 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.006s	sys 0.000s
I20260430 07:54:31.230728  2089 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "7a726e46b5764d19a25c166ef9a48881"
format_stamp: "Formatted at 2026-04-30 07:54:31 on dist-test-slave-1g5s"
I20260430 07:54:31.231017  2089 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:31.268857  2089 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:31.270893  2089 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:31.271339  2089 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:31.314703  2089 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:40041
I20260430 07:54:31.314687  2154 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:40041 every 8 connection(s)
I20260430 07:54:31.316664  2089 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:54:31.319331   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2089
I20260430 07:54:31.319523   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal/instance
I20260430 07:54:31.328693  2155 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:31.345803  2155 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Bootstrap starting.
I20260430 07:54:31.350072  2155 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:31.351363  2155 log.cc:826] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:31.354907  2155 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: No bootstrap required, opened a new log
I20260430 07:54:31.369282  2155 raft_consensus.cc:359] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:31.369930  2155 raft_consensus.cc:385] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:31.370554  2155 raft_consensus.cc:740] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7a726e46b5764d19a25c166ef9a48881, State: Initialized, Role: FOLLOWER
I20260430 07:54:31.373893  2155 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:31.374872  2155 raft_consensus.cc:399] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:31.375240  2155 raft_consensus.cc:493] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:31.375532  2155 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:31.379580  2155 raft_consensus.cc:515] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:31.380321  2155 leader_election.cc:304] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7a726e46b5764d19a25c166ef9a48881; no voters: 
I20260430 07:54:31.381109  2155 leader_election.cc:290] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:31.381429  2160 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:31.382472  2160 raft_consensus.cc:697] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 LEADER]: Becoming Leader. State: Replica: 7a726e46b5764d19a25c166ef9a48881, State: Running, Role: LEADER
I20260430 07:54:31.384189  2155 sys_catalog.cc:565] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:54:31.384087  2160 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:31.391640  2161 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7a726e46b5764d19a25c166ef9a48881" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } } }
I20260430 07:54:31.392149  2161 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:31.392362  2162 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7a726e46b5764d19a25c166ef9a48881. Latest consensus state: current_term: 1 leader_uuid: "7a726e46b5764d19a25c166ef9a48881" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } } }
I20260430 07:54:31.392762  2162 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:31.395854  2169 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:54:31.408054  2169 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:54:31.426195  2169 catalog_manager.cc:1357] Generated new cluster ID: 549bf5b72ee54d9986af395991a40658
I20260430 07:54:31.426371  2169 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:54:31.475440  2169 catalog_manager.cc:1380] Generated new certificate authority record
I20260430 07:54:31.478710  2169 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:54:31.493944  2169 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Generated new TSK 0
I20260430 07:54:31.495702  2169 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:54:31.515443   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:0
--local_ip_for_outbound_sockets=127.0.105.1
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:40041
--builtin_ntp_servers=127.0.105.20:45163
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--num_tablets_to_copy_simultaneously=1 with env {}
W20260430 07:54:32.042809  2179 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:32.043293  2179 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:32.043447  2179 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:32.055763  2179 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:32.056646  2179 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:54:32.080483  2179 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:45163
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:40041
--num_tablets_to_copy_simultaneously=1
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:32.082568  2179 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:32.084800  2179 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:32.111495  2184 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:32.113071  2185 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:32.113431  2187 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:32.114560  2179 server_base.cc:1061] running on GCE node
I20260430 07:54:32.115428  2179 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:32.116739  2179 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:32.118389  2179 hybrid_clock.cc:648] HybridClock initialized: now 1777535672118312 us; error 58 us; skew 500 ppm
I20260430 07:54:32.118978  2179 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:32.123392  2179 webserver.cc:492] Webserver started at http://127.0.105.1:43233/ using document root <none> and password file <none>
I20260430 07:54:32.124893  2179 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:32.125671  2179 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:32.126235  2179 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:32.130632  2179 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data/instance:
uuid: "e258ceb9443b4cacb91a53c0eaba9385"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.134588  2179 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal/instance:
uuid: "e258ceb9443b4cacb91a53c0eaba9385"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.149761  2179 fs_manager.cc:696] Time spent creating directory manager: real 0.014s	user 0.011s	sys 0.003s
I20260430 07:54:32.158782  2193 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.162045  2179 fs_manager.cc:730] Time spent opening block manager: real 0.008s	user 0.004s	sys 0.002s
I20260430 07:54:32.162336  2179 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "e258ceb9443b4cacb91a53c0eaba9385"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.162618  2179 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:32.193348  2179 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:32.195509  2179 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:32.195935  2179 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:32.197561  2179 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:32.200037  2179 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:32.200223  2179 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.200384  2179 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:32.200474  2179 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.266281  2179 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:35727
I20260430 07:54:32.266325  2305 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:35727 every 8 connection(s)
I20260430 07:54:32.268486  2179 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:54:32.275803   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2179
I20260430 07:54:32.277562   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-0/wal/instance
I20260430 07:54:32.282878   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:0
--local_ip_for_outbound_sockets=127.0.105.2
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:40041
--builtin_ntp_servers=127.0.105.20:45163
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--num_tablets_to_copy_simultaneously=1 with env {}
I20260430 07:54:32.313393  2306 heartbeater.cc:344] Connected to a master server at 127.0.105.62:40041
I20260430 07:54:32.314492  2306 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:32.315558  2306 heartbeater.cc:507] Master 127.0.105.62:40041 requested a full tablet report, sending...
I20260430 07:54:32.320859  2120 ts_manager.cc:194] Registered new tserver with Master: e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:32.323532  2120 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:32963
W20260430 07:54:32.794231  2310 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:32.794624  2310 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:32.794806  2310 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:32.805290  2310 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:32.805545  2310 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:54:32.820744  2310 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:45163
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:40041
--num_tablets_to_copy_simultaneously=1
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:32.825057  2310 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:32.828469  2310 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:32.843817  2315 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:32.843858  2316 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:32.845719  2318 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:32.847052  2310 server_base.cc:1061] running on GCE node
I20260430 07:54:32.848299  2310 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:32.849740  2310 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:32.851040  2310 hybrid_clock.cc:648] HybridClock initialized: now 1777535672850960 us; error 88 us; skew 500 ppm
I20260430 07:54:32.851492  2310 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:32.854645  2310 webserver.cc:492] Webserver started at http://127.0.105.2:41737/ using document root <none> and password file <none>
I20260430 07:54:32.855633  2310 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:32.855751  2310 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:32.856202  2310 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:32.858954  2310 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/instance:
uuid: "b2b974e3ff49408bba2efdb1fcaacfe1"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.860067  2310 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal/instance:
uuid: "b2b974e3ff49408bba2efdb1fcaacfe1"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.867041  2310 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.007s	sys 0.001s
I20260430 07:54:32.872088  2324 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.875289  2310 fs_manager.cc:730] Time spent opening block manager: real 0.006s	user 0.002s	sys 0.000s
I20260430 07:54:32.875591  2310 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "b2b974e3ff49408bba2efdb1fcaacfe1"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:32.876029  2310 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:32.925989  2310 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:32.927177  2310 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:32.927526  2310 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:32.930809  2310 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:32.935190  2310 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:32.935443  2310 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.936118  2310 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:32.936568  2310 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:32.998350  2310 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:42877
I20260430 07:54:32.998425  2436 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:42877 every 8 connection(s)
I20260430 07:54:33.000047  2310 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:54:33.002882   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2310
I20260430 07:54:33.003150   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal/instance
I20260430 07:54:33.023168  2437 heartbeater.cc:344] Connected to a master server at 127.0.105.62:40041
I20260430 07:54:33.023660  2437 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:33.025089  2437 heartbeater.cc:507] Master 127.0.105.62:40041 requested a full tablet report, sending...
I20260430 07:54:33.027524  2120 ts_manager.cc:194] Registered new tserver with Master: b2b974e3ff49408bba2efdb1fcaacfe1 (127.0.105.2:42877)
I20260430 07:54:33.028461  2120 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:58461
I20260430 07:54:33.041004   420 external_mini_cluster.cc:949] 2 TS(s) registered with all masters
I20260430 07:54:33.067620   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2310
I20260430 07:54:33.084915  2432 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:33.199301   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2310
I20260430 07:54:33.218813   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2089
I20260430 07:54:33.220264  2150 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:33.327937  2306 heartbeater.cc:499] Master 127.0.105.62:40041 was elected leader, sending a full tablet report...
I20260430 07:54:33.345278   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2089
I20260430 07:54:33.373906   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:40041
--webserver_interface=127.0.105.62
--webserver_port=34271
--builtin_ntp_servers=127.0.105.20:45163
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:40041
--master_tombstone_evicted_tablet_replicas=false with env {}
W20260430 07:54:33.866197  2450 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:33.866600  2450 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:33.866732  2450 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:33.881229  2450 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:54:33.881368  2450 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:33.881489  2450 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:54:33.881584  2450 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:54:33.896319  2450 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:45163
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
--master_tombstone_evicted_tablet_replicas=false
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:40041
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:40041
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=34271
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:33.900128  2450 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:33.906327  2450 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:33.917573  2456 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:33.917573  2455 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:33.919098  2458 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:33.920274  2450 server_base.cc:1061] running on GCE node
I20260430 07:54:33.921454  2450 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:33.923548  2450 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:33.925099  2450 hybrid_clock.cc:648] HybridClock initialized: now 1777535673925083 us; error 72 us; skew 500 ppm
I20260430 07:54:33.925637  2450 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:33.928282  2450 webserver.cc:492] Webserver started at http://127.0.105.62:34271/ using document root <none> and password file <none>
I20260430 07:54:33.929229  2450 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:33.929380  2450 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:33.936218  2450 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.001s	sys 0.004s
I20260430 07:54:33.940321  2464 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:33.942257  2450 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20260430 07:54:33.942483  2450 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "7a726e46b5764d19a25c166ef9a48881"
format_stamp: "Formatted at 2026-04-30 07:54:31 on dist-test-slave-1g5s"
I20260430 07:54:33.943332  2450 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:34.000311  2450 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:34.001509  2450 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:34.001848  2450 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:34.032553  2450 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:40041
I20260430 07:54:34.032612  2515 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:40041 every 8 connection(s)
I20260430 07:54:34.034484  2450 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:54:34.044934   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2450
I20260430 07:54:34.045442  2516 sys_catalog.cc:263] Verifying existing consensus state
I20260430 07:54:34.052068  2516 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Bootstrap starting.
I20260430 07:54:34.072700  2516 log.cc:826] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:34.083937  2516 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=3 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:34.084621  2516 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Bootstrap complete.
I20260430 07:54:34.093312  2516 raft_consensus.cc:359] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:34.094202  2516 raft_consensus.cc:740] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7a726e46b5764d19a25c166ef9a48881, State: Initialized, Role: FOLLOWER
I20260430 07:54:34.095188  2516 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:34.095467  2516 raft_consensus.cc:399] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:34.095624  2516 raft_consensus.cc:493] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:34.095841  2516 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:34.099798  2516 raft_consensus.cc:515] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:34.100427  2516 leader_election.cc:304] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7a726e46b5764d19a25c166ef9a48881; no voters: 
I20260430 07:54:34.101092  2516 leader_election.cc:290] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [CANDIDATE]: Term 2 election: Requested vote from peers 
I20260430 07:54:34.101382  2521 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:54:34.102180  2521 raft_consensus.cc:697] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [term 2 LEADER]: Becoming Leader. State: Replica: 7a726e46b5764d19a25c166ef9a48881, State: Running, Role: LEADER
I20260430 07:54:34.102757  2521 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } }
I20260430 07:54:34.103576  2516 sys_catalog.cc:565] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:54:34.106472  2523 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7a726e46b5764d19a25c166ef9a48881. Latest consensus state: current_term: 2 leader_uuid: "7a726e46b5764d19a25c166ef9a48881" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } } }
I20260430 07:54:34.107878  2523 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:34.107268  2522 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "7a726e46b5764d19a25c166ef9a48881" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7a726e46b5764d19a25c166ef9a48881" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 40041 } } }
I20260430 07:54:34.108258  2522 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881 [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:34.110154  2531 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:54:34.118427  2531 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:54:34.120388  2531 catalog_manager.cc:1269] Loaded cluster ID: 549bf5b72ee54d9986af395991a40658
I20260430 07:54:34.120555  2531 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:54:34.123600  2531 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:54:34.124979  2531 catalog_manager.cc:6055] T 00000000000000000000000000000000 P 7a726e46b5764d19a25c166ef9a48881: Loaded TSK: 0
I20260430 07:54:34.126472  2531 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:54:34.358623  2481 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" instance_seqno: 1777535672251729) as {username='slave'} at 127.0.105.1:47463; Asking this server to re-register.
I20260430 07:54:34.359724  2306 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:34.360087  2306 heartbeater.cc:507] Master 127.0.105.62:40041 requested a full tablet report, sending...
I20260430 07:54:34.362396  2481 ts_manager.cc:194] Registered new tserver with Master: e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:34.369876   420 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20260430 07:54:34.397574  2481 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:35852:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
  rows: "<redacted>" "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: "<redacted>"
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20260430 07:54:34.448851  2240 tablet_service.cc:1511] Processing CreateTablet for tablet f64ef8f482c54298b8d95baf1da82a05 (DEFAULT_TABLE table=test-workload [id=d7a1ad6f83a841908fe89451abc99af8]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20260430 07:54:34.449760  2238 tablet_service.cc:1511] Processing CreateTablet for tablet deae96bf702e425a8fdd5490bbd7de0a (DEFAULT_TABLE table=test-workload [id=d7a1ad6f83a841908fe89451abc99af8]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20260430 07:54:34.449558  2239 tablet_service.cc:1511] Processing CreateTablet for tablet 9915e5b903db45249f9512369bcc088e (DEFAULT_TABLE table=test-workload [id=d7a1ad6f83a841908fe89451abc99af8]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20260430 07:54:34.448735  2241 tablet_service.cc:1511] Processing CreateTablet for tablet bb462be3b7334c22a48430595838d3d1 (DEFAULT_TABLE table=test-workload [id=d7a1ad6f83a841908fe89451abc99af8]), partition=RANGE (key) PARTITION VALUES < 536870911
I20260430 07:54:34.452555  2239 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9915e5b903db45249f9512369bcc088e. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:34.453462  2240 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f64ef8f482c54298b8d95baf1da82a05. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:34.454213  2241 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bb462be3b7334c22a48430595838d3d1. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:34.455560  2238 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet deae96bf702e425a8fdd5490bbd7de0a. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:34.479801  2548 tablet_bootstrap.cc:492] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Bootstrap starting.
I20260430 07:54:34.483858  2548 tablet_bootstrap.cc:654] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:34.485276  2548 log.cc:826] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:34.488688  2548 tablet_bootstrap.cc:492] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: No bootstrap required, opened a new log
I20260430 07:54:34.489341  2548 ts_tablet_manager.cc:1403] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Time spent bootstrapping tablet: real 0.010s	user 0.003s	sys 0.004s
I20260430 07:54:34.496011  2548 raft_consensus.cc:359] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.496464  2548 raft_consensus.cc:385] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:34.496593  2548 raft_consensus.cc:740] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Initialized, Role: FOLLOWER
I20260430 07:54:34.497471  2548 consensus_queue.cc:260] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.497732  2548 raft_consensus.cc:399] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:34.497884  2548 raft_consensus.cc:493] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:34.498064  2548 raft_consensus.cc:3060] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:34.500439  2548 raft_consensus.cc:515] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.501154  2548 leader_election.cc:304] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e258ceb9443b4cacb91a53c0eaba9385; no voters: 
I20260430 07:54:34.502080  2548 leader_election.cc:290] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:34.502357  2550 raft_consensus.cc:2804] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:34.504002  2548 ts_tablet_manager.cc:1434] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Time spent starting tablet: real 0.014s	user 0.004s	sys 0.009s
I20260430 07:54:34.504452  2548 tablet_bootstrap.cc:492] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: Bootstrap starting.
I20260430 07:54:34.506073  2548 tablet_bootstrap.cc:654] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:34.509006  2548 tablet_bootstrap.cc:492] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: No bootstrap required, opened a new log
I20260430 07:54:34.509294  2548 ts_tablet_manager.cc:1403] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: Time spent bootstrapping tablet: real 0.005s	user 0.001s	sys 0.003s
I20260430 07:54:34.509387  2550 raft_consensus.cc:697] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [term 1 LEADER]: Becoming Leader. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Running, Role: LEADER
I20260430 07:54:34.510076  2550 consensus_queue.cc:237] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.510241  2548 raft_consensus.cc:359] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.510475  2548 raft_consensus.cc:385] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:34.510593  2548 raft_consensus.cc:740] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Initialized, Role: FOLLOWER
I20260430 07:54:34.510870  2548 consensus_queue.cc:260] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.511066  2548 raft_consensus.cc:399] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:34.511178  2548 raft_consensus.cc:493] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:34.511296  2548 raft_consensus.cc:3060] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:34.514408  2548 raft_consensus.cc:515] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.514864  2548 leader_election.cc:304] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e258ceb9443b4cacb91a53c0eaba9385; no voters: 
I20260430 07:54:34.515651  2548 leader_election.cc:290] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:34.516018  2551 raft_consensus.cc:2804] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:34.516274  2548 ts_tablet_manager.cc:1434] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20260430 07:54:34.516631  2548 tablet_bootstrap.cc:492] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: Bootstrap starting.
I20260430 07:54:34.518304  2481 catalog_manager.cc:5671] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385 reported cstate change: term changed from 0 to 1, leader changed from <none> to e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1). New cstate: current_term: 1 leader_uuid: "e258ceb9443b4cacb91a53c0eaba9385" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } health_report { overall_health: HEALTHY } } }
I20260430 07:54:34.519752  2548 tablet_bootstrap.cc:654] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: Neither blocks nor log segments found. Creating new log.
W20260430 07:54:34.526633  2307 tablet.cc:2404] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:34.523030  2551 raft_consensus.cc:697] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 LEADER]: Becoming Leader. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Running, Role: LEADER
I20260430 07:54:34.528672  2551 consensus_queue.cc:237] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.531034  2548 tablet_bootstrap.cc:492] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: No bootstrap required, opened a new log
I20260430 07:54:34.531297  2548 ts_tablet_manager.cc:1403] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: Time spent bootstrapping tablet: real 0.015s	user 0.003s	sys 0.007s
I20260430 07:54:34.532250  2548 raft_consensus.cc:359] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.532493  2548 raft_consensus.cc:385] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:34.532747  2548 raft_consensus.cc:740] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Initialized, Role: FOLLOWER
I20260430 07:54:34.533098  2548 consensus_queue.cc:260] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.533058  2481 catalog_manager.cc:5671] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385 reported cstate change: term changed from 0 to 1, leader changed from <none> to e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1). New cstate: current_term: 1 leader_uuid: "e258ceb9443b4cacb91a53c0eaba9385" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } health_report { overall_health: HEALTHY } } }
I20260430 07:54:34.533373  2548 raft_consensus.cc:399] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:34.533485  2548 raft_consensus.cc:493] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:34.533605  2548 raft_consensus.cc:3060] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:34.536870  2548 raft_consensus.cc:515] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.537348  2548 leader_election.cc:304] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e258ceb9443b4cacb91a53c0eaba9385; no voters: 
I20260430 07:54:34.537614  2548 leader_election.cc:290] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:34.537689  2551 raft_consensus.cc:2804] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:34.538434  2551 raft_consensus.cc:697] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [term 1 LEADER]: Becoming Leader. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Running, Role: LEADER
I20260430 07:54:34.538511  2548 ts_tablet_manager.cc:1434] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20260430 07:54:34.538801  2551 consensus_queue.cc:237] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.539108  2548 tablet_bootstrap.cc:492] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: Bootstrap starting.
I20260430 07:54:34.542906  2548 tablet_bootstrap.cc:654] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:34.542959  2481 catalog_manager.cc:5671] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385 reported cstate change: term changed from 0 to 1, leader changed from <none> to e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1). New cstate: current_term: 1 leader_uuid: "e258ceb9443b4cacb91a53c0eaba9385" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } health_report { overall_health: HEALTHY } } }
I20260430 07:54:34.547480  2548 tablet_bootstrap.cc:492] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: No bootstrap required, opened a new log
I20260430 07:54:34.547876  2548 ts_tablet_manager.cc:1403] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: Time spent bootstrapping tablet: real 0.009s	user 0.004s	sys 0.002s
I20260430 07:54:34.549415  2548 raft_consensus.cc:359] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.549633  2548 raft_consensus.cc:385] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:34.549753  2548 raft_consensus.cc:740] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Initialized, Role: FOLLOWER
I20260430 07:54:34.549983  2548 consensus_queue.cc:260] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.550163  2548 raft_consensus.cc:399] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:34.550249  2548 raft_consensus.cc:493] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:34.550362  2548 raft_consensus.cc:3060] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:34.552855  2548 raft_consensus.cc:515] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.553148  2548 leader_election.cc:304] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e258ceb9443b4cacb91a53c0eaba9385; no voters: 
I20260430 07:54:34.553434  2548 leader_election.cc:290] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:34.553527  2551 raft_consensus.cc:2804] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:34.553702  2551 raft_consensus.cc:697] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [term 1 LEADER]: Becoming Leader. State: Replica: e258ceb9443b4cacb91a53c0eaba9385, State: Running, Role: LEADER
I20260430 07:54:34.554021  2548 ts_tablet_manager.cc:1434] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.001s
I20260430 07:54:34.554049  2551 consensus_queue.cc:237] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:34.557291  2481 catalog_manager.cc:5671] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385 reported cstate change: term changed from 0 to 1, leader changed from <none> to e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1). New cstate: current_term: 1 leader_uuid: "e258ceb9443b4cacb91a53c0eaba9385" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } health_report { overall_health: HEALTHY } } }
I20260430 07:54:36.181841   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:42877
--local_ip_for_outbound_sockets=127.0.105.2
--tserver_master_addrs=127.0.105.62:40041
--webserver_port=41737
--webserver_interface=127.0.105.2
--builtin_ntp_servers=127.0.105.20:45163
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--num_tablets_to_copy_simultaneously=1 with env {}
W20260430 07:54:36.616036  2580 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:36.616465  2580 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:36.616619  2580 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:36.626070  2580 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:36.626338  2580 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:54:36.637571  2580 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:45163
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:42877
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=41737
--tserver_master_addrs=127.0.105.62:40041
--num_tablets_to_copy_simultaneously=1
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:36.639487  2580 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:36.641680  2580 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:36.653703  2586 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:36.654513  2585 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:36.654625  2588 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:36.656755  2580 server_base.cc:1061] running on GCE node
I20260430 07:54:36.657771  2580 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:36.659058  2580 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:36.660403  2580 hybrid_clock.cc:648] HybridClock initialized: now 1777535676660357 us; error 48 us; skew 500 ppm
I20260430 07:54:36.660909  2580 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:36.665293  2580 webserver.cc:492] Webserver started at http://127.0.105.2:41737/ using document root <none> and password file <none>
I20260430 07:54:36.666419  2580 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:36.666639  2580 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:36.674849  2580 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.003s	sys 0.001s
I20260430 07:54:36.678861  2594 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:36.680794  2580 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.001s
I20260430 07:54:36.680997  2580 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "b2b974e3ff49408bba2efdb1fcaacfe1"
format_stamp: "Formatted at 2026-04-30 07:54:32 on dist-test-slave-1g5s"
I20260430 07:54:36.682020  2580 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:36.705801  2580 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:36.706806  2580 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:36.707130  2580 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:36.708179  2580 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:36.710507  2580 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:36.710611  2580 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:36.710733  2580 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:36.710777  2580 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:36.756310  2580 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:42877
I20260430 07:54:36.756438  2706 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:42877 every 8 connection(s)
I20260430 07:54:36.758106  2580 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestTabletCopyThrottling.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:54:36.767385   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2580
I20260430 07:54:36.775215  2707 heartbeater.cc:344] Connected to a master server at 127.0.105.62:40041
I20260430 07:54:36.775693  2707 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:36.777158  2707 heartbeater.cc:507] Master 127.0.105.62:40041 requested a full tablet report, sending...
I20260430 07:54:36.779379  2481 ts_manager.cc:194] Registered new tserver with Master: b2b974e3ff49408bba2efdb1fcaacfe1 (127.0.105.2:42877)
I20260430 07:54:36.779982  2713 ts_tablet_manager.cc:933] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Initiating tablet copy from peer e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:36.780753  2481 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:44661
I20260430 07:54:36.781126  2713 tablet_copy_client.cc:323] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.1:35727
I20260430 07:54:36.788978  2281 tablet_copy_service.cc:140] P e258ceb9443b4cacb91a53c0eaba9385: Received BeginTabletCopySession request for tablet 9915e5b903db45249f9512369bcc088e from peer b2b974e3ff49408bba2efdb1fcaacfe1 ({username='slave'} at 127.0.105.2:60383)
I20260430 07:54:36.789283  2281 tablet_copy_service.cc:161] P e258ceb9443b4cacb91a53c0eaba9385: Beginning new tablet copy session on tablet 9915e5b903db45249f9512369bcc088e from peer b2b974e3ff49408bba2efdb1fcaacfe1 at {username='slave'} at 127.0.105.2:60383: session id = b2b974e3ff49408bba2efdb1fcaacfe1-9915e5b903db45249f9512369bcc088e
I20260430 07:54:36.792431  2281 tablet_copy_source_session.cc:215] T 9915e5b903db45249f9512369bcc088e P e258ceb9443b4cacb91a53c0eaba9385: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:36.799440  2713 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9915e5b903db45249f9512369bcc088e. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:36.811265  2713 tablet_copy_client.cc:806] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:36.811766  2713 tablet_copy_client.cc:670] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:36.817257  2713 tablet_copy_client.cc:538] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:36.821084  2713 tablet_bootstrap.cc:492] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap starting.
I20260430 07:54:36.901262  2713 log.cc:826] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:37.378664  2713 tablet_bootstrap.cc:492] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap replayed 1/1 log segments. Stats: ops{read=216 overwritten=0 applied=216 ignored=0} inserts{seen=2665 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:37.379357  2713 tablet_bootstrap.cc:492] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap complete.
I20260430 07:54:37.379976  2713 ts_tablet_manager.cc:1403] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent bootstrapping tablet: real 0.559s	user 0.504s	sys 0.054s
I20260430 07:54:37.385360  2713 raft_consensus.cc:359] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:37.385867  2713 raft_consensus.cc:740] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Becoming Follower/Learner. State: Replica: b2b974e3ff49408bba2efdb1fcaacfe1, State: Initialized, Role: NON_PARTICIPANT
I20260430 07:54:37.386633  2713 consensus_queue.cc:260] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 216, Last appended: 1.216, Last appended by leader: 216, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:37.388046  2707 heartbeater.cc:499] Master 127.0.105.62:40041 was elected leader, sending a full tablet report...
I20260430 07:54:37.389051  2713 ts_tablet_manager.cc:1434] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent starting tablet: real 0.009s	user 0.006s	sys 0.004s
I20260430 07:54:37.390339  2281 tablet_copy_service.cc:342] P e258ceb9443b4cacb91a53c0eaba9385: Request end of tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-9915e5b903db45249f9512369bcc088e received from {username='slave'} at 127.0.105.2:60383
I20260430 07:54:37.390722  2281 tablet_copy_service.cc:434] P e258ceb9443b4cacb91a53c0eaba9385: ending tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-9915e5b903db45249f9512369bcc088e on tablet 9915e5b903db45249f9512369bcc088e with peer b2b974e3ff49408bba2efdb1fcaacfe1
I20260430 07:54:37.393101  2713 ts_tablet_manager.cc:933] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Initiating tablet copy from peer e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:37.393792  2713 tablet_copy_client.cc:323] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.1:35727
I20260430 07:54:37.394538  2281 tablet_copy_service.cc:140] P e258ceb9443b4cacb91a53c0eaba9385: Received BeginTabletCopySession request for tablet f64ef8f482c54298b8d95baf1da82a05 from peer b2b974e3ff49408bba2efdb1fcaacfe1 ({username='slave'} at 127.0.105.2:60383)
I20260430 07:54:37.394733  2281 tablet_copy_service.cc:161] P e258ceb9443b4cacb91a53c0eaba9385: Beginning new tablet copy session on tablet f64ef8f482c54298b8d95baf1da82a05 from peer b2b974e3ff49408bba2efdb1fcaacfe1 at {username='slave'} at 127.0.105.2:60383: session id = b2b974e3ff49408bba2efdb1fcaacfe1-f64ef8f482c54298b8d95baf1da82a05
I20260430 07:54:37.397176  2281 tablet_copy_source_session.cc:215] T f64ef8f482c54298b8d95baf1da82a05 P e258ceb9443b4cacb91a53c0eaba9385: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:37.398595  2713 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f64ef8f482c54298b8d95baf1da82a05. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:37.403504  2713 tablet_copy_client.cc:806] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:37.403831  2713 tablet_copy_client.cc:670] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:37.407199  2713 tablet_copy_client.cc:538] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:37.410552  2713 tablet_bootstrap.cc:492] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap starting.
I20260430 07:54:37.843652  2713 tablet_bootstrap.cc:492] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap replayed 1/1 log segments. Stats: ops{read=216 overwritten=0 applied=216 ignored=0} inserts{seen=2715 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:37.844127  2713 tablet_bootstrap.cc:492] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap complete.
I20260430 07:54:37.844447  2713 ts_tablet_manager.cc:1403] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent bootstrapping tablet: real 0.434s	user 0.395s	sys 0.036s
I20260430 07:54:37.845525  2713 raft_consensus.cc:359] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:37.845763  2713 raft_consensus.cc:740] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Becoming Follower/Learner. State: Replica: b2b974e3ff49408bba2efdb1fcaacfe1, State: Initialized, Role: NON_PARTICIPANT
I20260430 07:54:37.845991  2713 consensus_queue.cc:260] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 216, Last appended: 1.216, Last appended by leader: 216, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:37.846508  2713 ts_tablet_manager.cc:1434] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent starting tablet: real 0.002s	user 0.004s	sys 0.000s
I20260430 07:54:37.847594  2281 tablet_copy_service.cc:342] P e258ceb9443b4cacb91a53c0eaba9385: Request end of tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-f64ef8f482c54298b8d95baf1da82a05 received from {username='slave'} at 127.0.105.2:60383
I20260430 07:54:37.847784  2281 tablet_copy_service.cc:434] P e258ceb9443b4cacb91a53c0eaba9385: ending tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-f64ef8f482c54298b8d95baf1da82a05 on tablet f64ef8f482c54298b8d95baf1da82a05 with peer b2b974e3ff49408bba2efdb1fcaacfe1
W20260430 07:54:37.850157  2713 ts_tablet_manager.cc:732] T f64ef8f482c54298b8d95baf1da82a05 P b2b974e3ff49408bba2efdb1fcaacfe1: Tablet Copy: Invalid argument: Leader has replica of tablet f64ef8f482c54298b8d95baf1da82a05 with term 0, which is lower than last-logged term 1 on local replica. Rejecting tablet copy request
W20260430 07:54:37.856828  2713 ts_tablet_manager.cc:732] T 9915e5b903db45249f9512369bcc088e P b2b974e3ff49408bba2efdb1fcaacfe1: Tablet Copy: Invalid argument: Leader has replica of tablet 9915e5b903db45249f9512369bcc088e with term 0, which is lower than last-logged term 1 on local replica. Rejecting tablet copy request
I20260430 07:54:37.860852  2713 ts_tablet_manager.cc:933] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Initiating tablet copy from peer e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:37.861898  2713 tablet_copy_client.cc:323] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.1:35727
I20260430 07:54:37.863250  2281 tablet_copy_service.cc:140] P e258ceb9443b4cacb91a53c0eaba9385: Received BeginTabletCopySession request for tablet bb462be3b7334c22a48430595838d3d1 from peer b2b974e3ff49408bba2efdb1fcaacfe1 ({username='slave'} at 127.0.105.2:60383)
I20260430 07:54:37.863544  2281 tablet_copy_service.cc:161] P e258ceb9443b4cacb91a53c0eaba9385: Beginning new tablet copy session on tablet bb462be3b7334c22a48430595838d3d1 from peer b2b974e3ff49408bba2efdb1fcaacfe1 at {username='slave'} at 127.0.105.2:60383: session id = b2b974e3ff49408bba2efdb1fcaacfe1-bb462be3b7334c22a48430595838d3d1
I20260430 07:54:37.865937  2281 tablet_copy_source_session.cc:215] T bb462be3b7334c22a48430595838d3d1 P e258ceb9443b4cacb91a53c0eaba9385: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:37.867707  2713 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bb462be3b7334c22a48430595838d3d1. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:37.873580  2713 tablet_copy_client.cc:806] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:37.873885  2713 tablet_copy_client.cc:670] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:37.877885  2713 tablet_copy_client.cc:538] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:37.881641  2713 tablet_bootstrap.cc:492] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap starting.
I20260430 07:54:38.311945  2713 tablet_bootstrap.cc:492] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap replayed 1/1 log segments. Stats: ops{read=216 overwritten=0 applied=216 ignored=0} inserts{seen=2663 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:38.312412  2713 tablet_bootstrap.cc:492] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap complete.
I20260430 07:54:38.312716  2713 ts_tablet_manager.cc:1403] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent bootstrapping tablet: real 0.431s	user 0.376s	sys 0.052s
I20260430 07:54:38.313522  2713 raft_consensus.cc:359] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:38.313710  2713 raft_consensus.cc:740] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Becoming Follower/Learner. State: Replica: b2b974e3ff49408bba2efdb1fcaacfe1, State: Initialized, Role: NON_PARTICIPANT
I20260430 07:54:38.313939  2713 consensus_queue.cc:260] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 216, Last appended: 1.216, Last appended by leader: 216, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:38.314452  2713 ts_tablet_manager.cc:1434] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent starting tablet: real 0.002s	user 0.000s	sys 0.004s
I20260430 07:54:38.315335  2281 tablet_copy_service.cc:342] P e258ceb9443b4cacb91a53c0eaba9385: Request end of tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-bb462be3b7334c22a48430595838d3d1 received from {username='slave'} at 127.0.105.2:60383
I20260430 07:54:38.315524  2281 tablet_copy_service.cc:434] P e258ceb9443b4cacb91a53c0eaba9385: ending tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-bb462be3b7334c22a48430595838d3d1 on tablet bb462be3b7334c22a48430595838d3d1 with peer b2b974e3ff49408bba2efdb1fcaacfe1
W20260430 07:54:38.320498  2713 ts_tablet_manager.cc:732] T bb462be3b7334c22a48430595838d3d1 P b2b974e3ff49408bba2efdb1fcaacfe1: Tablet Copy: Invalid argument: Leader has replica of tablet bb462be3b7334c22a48430595838d3d1 with term 0, which is lower than last-logged term 1 on local replica. Rejecting tablet copy request
I20260430 07:54:38.324540  2713 ts_tablet_manager.cc:933] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Initiating tablet copy from peer e258ceb9443b4cacb91a53c0eaba9385 (127.0.105.1:35727)
I20260430 07:54:38.327229  2713 tablet_copy_client.cc:323] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.1:35727
I20260430 07:54:38.329294  2281 tablet_copy_service.cc:140] P e258ceb9443b4cacb91a53c0eaba9385: Received BeginTabletCopySession request for tablet deae96bf702e425a8fdd5490bbd7de0a from peer b2b974e3ff49408bba2efdb1fcaacfe1 ({username='slave'} at 127.0.105.2:60383)
I20260430 07:54:38.329533  2281 tablet_copy_service.cc:161] P e258ceb9443b4cacb91a53c0eaba9385: Beginning new tablet copy session on tablet deae96bf702e425a8fdd5490bbd7de0a from peer b2b974e3ff49408bba2efdb1fcaacfe1 at {username='slave'} at 127.0.105.2:60383: session id = b2b974e3ff49408bba2efdb1fcaacfe1-deae96bf702e425a8fdd5490bbd7de0a
I20260430 07:54:38.332160  2281 tablet_copy_source_session.cc:215] T deae96bf702e425a8fdd5490bbd7de0a P e258ceb9443b4cacb91a53c0eaba9385: Tablet Copy: opened 0 blocks and 1 log segments
I20260430 07:54:38.334612  2713 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet deae96bf702e425a8fdd5490bbd7de0a. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:38.340909  2713 tablet_copy_client.cc:806] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 0 data blocks...
I20260430 07:54:38.341735  2713 tablet_copy_client.cc:670] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Starting download of 1 WAL segments...
I20260430 07:54:38.345786  2713 tablet_copy_client.cc:538] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:54:38.349305  2713 tablet_bootstrap.cc:492] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap starting.
I20260430 07:54:38.806079  2713 tablet_bootstrap.cc:492] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap replayed 1/1 log segments. Stats: ops{read=216 overwritten=0 applied=216 ignored=0} inserts{seen=2707 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:54:38.806592  2713 tablet_bootstrap.cc:492] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Bootstrap complete.
I20260430 07:54:38.806922  2713 ts_tablet_manager.cc:1403] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent bootstrapping tablet: real 0.458s	user 0.411s	sys 0.048s
I20260430 07:54:38.807703  2713 raft_consensus.cc:359] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:38.807860  2713 raft_consensus.cc:740] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1 [term 1 NON_PARTICIPANT]: Becoming Follower/Learner. State: Replica: b2b974e3ff49408bba2efdb1fcaacfe1, State: Initialized, Role: NON_PARTICIPANT
I20260430 07:54:38.808115  2713 consensus_queue.cc:260] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 216, Last appended: 1.216, Last appended by leader: 216, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e258ceb9443b4cacb91a53c0eaba9385" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 35727 } }
I20260430 07:54:38.808598  2713 ts_tablet_manager.cc:1434] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Time spent starting tablet: real 0.002s	user 0.000s	sys 0.000s
I20260430 07:54:38.809834  2281 tablet_copy_service.cc:342] P e258ceb9443b4cacb91a53c0eaba9385: Request end of tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-deae96bf702e425a8fdd5490bbd7de0a received from {username='slave'} at 127.0.105.2:60383
I20260430 07:54:38.810104  2281 tablet_copy_service.cc:434] P e258ceb9443b4cacb91a53c0eaba9385: ending tablet copy session b2b974e3ff49408bba2efdb1fcaacfe1-deae96bf702e425a8fdd5490bbd7de0a on tablet deae96bf702e425a8fdd5490bbd7de0a with peer b2b974e3ff49408bba2efdb1fcaacfe1
W20260430 07:54:38.815953  2713 ts_tablet_manager.cc:732] T deae96bf702e425a8fdd5490bbd7de0a P b2b974e3ff49408bba2efdb1fcaacfe1: Tablet Copy: Invalid argument: Leader has replica of tablet deae96bf702e425a8fdd5490bbd7de0a with term 0, which is lower than last-logged term 1 on local replica. Rejecting tablet copy request
I20260430 07:54:38.820554   420 tablet_copy-itest.cc:1252] Number of Service unavailable responses: 498
I20260430 07:54:38.820760   420 tablet_copy-itest.cc:1253] Number of in progress responses: 377
I20260430 07:54:38.825430   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2179
I20260430 07:54:38.827428  2301 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:39.013013   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2179
I20260430 07:54:39.040779   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2580
I20260430 07:54:39.042665  2702 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:39.194450   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2580
I20260430 07:54:39.220084   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2450
I20260430 07:54:39.221665  2511 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:54:39.358680   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2450
2026-04-30T07:54:39Z chronyd exiting
[       OK ] TabletCopyITest.TestTabletCopyThrottling (8808 ms)
[ RUN      ] TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate
2026-04-30T07:54:39Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2026-04-30T07:54:39Z Disabled control of system clock
I20260430 07:54:39.471928   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:36957
--webserver_interface=127.0.105.62
--webserver_port=0
--builtin_ntp_servers=127.0.105.20:38709
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:36957 with env {}
W20260430 07:54:39.972025  2730 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:39.972738  2730 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:39.972837  2730 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:39.984802  2730 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:54:39.984939  2730 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:39.985000  2730 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:54:39.985044  2730 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:54:40.003903  2730 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38709
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:36957
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:36957
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:40.006389  2730 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:40.011106  2730 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:40.031680  2735 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:40.031680  2736 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:40.034929  2738 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:40.037225  2730 server_base.cc:1061] running on GCE node
I20260430 07:54:40.038759  2730 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:40.041549  2730 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:40.043736  2730 hybrid_clock.cc:648] HybridClock initialized: now 1777535680043309 us; error 78 us; skew 500 ppm
I20260430 07:54:40.044587  2730 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:40.050217  2730 webserver.cc:492] Webserver started at http://127.0.105.62:40161/ using document root <none> and password file <none>
I20260430 07:54:40.051222  2730 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:40.051354  2730 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:40.051735  2730 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:40.055718  2730 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data/instance:
uuid: "738a284d906e4138b27ad5f7e3d58bae"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.056838  2730 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal/instance:
uuid: "738a284d906e4138b27ad5f7e3d58bae"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.063817  2730 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.000s
I20260430 07:54:40.068841  2744 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:40.071681  2730 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.001s
I20260430 07:54:40.071950  2730 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "738a284d906e4138b27ad5f7e3d58bae"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.072212  2730 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:40.092270  2730 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:40.093384  2730 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:40.093715  2730 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:40.122763  2730 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:36957
I20260430 07:54:40.122745  2795 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:36957 every 8 connection(s)
I20260430 07:54:40.125283  2730 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:54:40.127475   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2730
I20260430 07:54:40.127739   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/master-0/wal/instance
I20260430 07:54:40.132107  2796 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:40.149173  2796 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae: Bootstrap starting.
I20260430 07:54:40.154522  2796 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:40.156068  2796 log.cc:826] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:40.160105  2796 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae: No bootstrap required, opened a new log
I20260430 07:54:40.166313  2796 raft_consensus.cc:359] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } }
I20260430 07:54:40.166798  2796 raft_consensus.cc:385] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:40.166952  2796 raft_consensus.cc:740] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 738a284d906e4138b27ad5f7e3d58bae, State: Initialized, Role: FOLLOWER
I20260430 07:54:40.167966  2796 consensus_queue.cc:260] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } }
I20260430 07:54:40.168279  2796 raft_consensus.cc:399] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:54:40.168468  2796 raft_consensus.cc:493] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:54:40.168701  2796 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:40.171871  2796 raft_consensus.cc:515] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } }
I20260430 07:54:40.172603  2796 leader_election.cc:304] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 738a284d906e4138b27ad5f7e3d58bae; no voters: 
I20260430 07:54:40.173820  2801 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:40.173755  2796 leader_election.cc:290] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:54:40.177851  2801 raft_consensus.cc:697] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [term 1 LEADER]: Becoming Leader. State: Replica: 738a284d906e4138b27ad5f7e3d58bae, State: Running, Role: LEADER
I20260430 07:54:40.179039  2796 sys_catalog.cc:565] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:54:40.178990  2801 consensus_queue.cc:237] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } }
I20260430 07:54:40.186844  2803 sys_catalog.cc:455] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "738a284d906e4138b27ad5f7e3d58bae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } } }
I20260430 07:54:40.187621  2803 sys_catalog.cc:458] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:40.188524  2802 sys_catalog.cc:455] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [sys.catalog]: SysCatalogTable state changed. Reason: New leader 738a284d906e4138b27ad5f7e3d58bae. Latest consensus state: current_term: 1 leader_uuid: "738a284d906e4138b27ad5f7e3d58bae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "738a284d906e4138b27ad5f7e3d58bae" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 36957 } } }
I20260430 07:54:40.188954  2802 sys_catalog.cc:458] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae [sys.catalog]: This master's current role is: LEADER
I20260430 07:54:40.199082  2808 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:54:40.205863  2808 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:54:40.218930  2808 catalog_manager.cc:1357] Generated new cluster ID: 0c914b3123454b9ca9024242c4dcb83c
I20260430 07:54:40.219144  2808 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:54:40.236876  2808 catalog_manager.cc:1380] Generated new certificate authority record
I20260430 07:54:40.238675  2808 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:54:40.251550  2808 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 738a284d906e4138b27ad5f7e3d58bae: Generated new TSK 0
I20260430 07:54:40.252786  2808 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:54:40.260528   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:0
--local_ip_for_outbound_sockets=127.0.105.1
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:36957
--builtin_ntp_servers=127.0.105.20:38709
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_flush_memrowset=false
--enable_flush_deltamemstores=false
--tablet_copy_download_threads_nums_per_session=4
--log_segment_size_mb=1 with env {}
W20260430 07:54:40.728729  2820 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:40.729676  2820 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:40.729844  2820 flags.cc:432] Enabled unsafe flag: --enable_flush_deltamemstores=false
W20260430 07:54:40.729944  2820 flags.cc:432] Enabled unsafe flag: --enable_flush_memrowset=false
W20260430 07:54:40.730053  2820 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:40.742714  2820 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:40.743000  2820 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:54:40.759760  2820 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38709
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_segment_size_mb=1
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=0
--enable_flush_deltamemstores=false
--enable_flush_memrowset=false
--tserver_master_addrs=127.0.105.62:36957
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:40.762084  2820 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:40.764161  2820 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:40.778543  2826 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:40.778795  2825 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:40.779853  2828 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:40.781970  2820 server_base.cc:1061] running on GCE node
I20260430 07:54:40.783010  2820 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:40.784124  2820 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:40.785383  2820 hybrid_clock.cc:648] HybridClock initialized: now 1777535680785309 us; error 78 us; skew 500 ppm
I20260430 07:54:40.785773  2820 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:40.788913  2820 webserver.cc:492] Webserver started at http://127.0.105.1:41889/ using document root <none> and password file <none>
I20260430 07:54:40.790092  2820 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:40.790254  2820 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:40.790901  2820 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:40.793776  2820 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data/instance:
uuid: "9a1d89563949413caa9bd86e37b2dcfc"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.794672  2820 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal/instance:
uuid: "9a1d89563949413caa9bd86e37b2dcfc"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.801520  2820 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.007s	sys 0.001s
I20260430 07:54:40.806548  2834 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:40.808727  2820 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.005s	sys 0.000s
I20260430 07:54:40.808959  2820 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "9a1d89563949413caa9bd86e37b2dcfc"
format_stamp: "Formatted at 2026-04-30 07:54:40 on dist-test-slave-1g5s"
I20260430 07:54:40.809265  2820 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:40.833729  2820 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:40.834754  2820 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:40.835093  2820 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:40.836440  2820 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:40.838708  2820 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:40.838884  2820 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:40.839037  2820 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:40.839129  2820 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:40.888633  2820 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:42857
I20260430 07:54:40.888701  2946 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:42857 every 8 connection(s)
I20260430 07:54:40.890738  2820 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:54:40.901332   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2820
I20260430 07:54:40.901546   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-0/wal/instance
I20260430 07:54:40.910193   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:0
--local_ip_for_outbound_sockets=127.0.105.2
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:36957
--builtin_ntp_servers=127.0.105.20:38709
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_flush_memrowset=false
--enable_flush_deltamemstores=false
--tablet_copy_download_threads_nums_per_session=4
--log_segment_size_mb=1 with env {}
I20260430 07:54:40.918602  2947 heartbeater.cc:344] Connected to a master server at 127.0.105.62:36957
I20260430 07:54:40.919088  2947 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:40.921010  2947 heartbeater.cc:507] Master 127.0.105.62:36957 requested a full tablet report, sending...
I20260430 07:54:40.924957  2761 ts_manager.cc:194] Registered new tserver with Master: 9a1d89563949413caa9bd86e37b2dcfc (127.0.105.1:42857)
I20260430 07:54:40.927287  2761 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:58277
W20260430 07:54:41.452081  2951 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:41.452451  2951 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:41.452615  2951 flags.cc:432] Enabled unsafe flag: --enable_flush_deltamemstores=false
W20260430 07:54:41.452698  2951 flags.cc:432] Enabled unsafe flag: --enable_flush_memrowset=false
W20260430 07:54:41.452791  2951 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:41.462421  2951 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:41.462639  2951 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:54:41.478266  2951 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38709
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_segment_size_mb=1
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=0
--enable_flush_deltamemstores=false
--enable_flush_memrowset=false
--tserver_master_addrs=127.0.105.62:36957
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:41.481819  2951 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:41.484194  2951 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:41.501657  2956 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:41.501636  2957 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:41.503010  2959 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:41.505636  2951 server_base.cc:1061] running on GCE node
I20260430 07:54:41.506522  2951 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:41.507789  2951 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:41.509429  2951 hybrid_clock.cc:648] HybridClock initialized: now 1777535681509343 us; error 89 us; skew 500 ppm
I20260430 07:54:41.509827  2951 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:41.512795  2951 webserver.cc:492] Webserver started at http://127.0.105.2:45073/ using document root <none> and password file <none>
I20260430 07:54:41.514283  2951 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:41.514619  2951 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:41.515355  2951 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:41.518939  2951 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data/instance:
uuid: "e1c76ca1eeb8497091405e8838166d4c"
format_stamp: "Formatted at 2026-04-30 07:54:41 on dist-test-slave-1g5s"
I20260430 07:54:41.520123  2951 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal/instance:
uuid: "e1c76ca1eeb8497091405e8838166d4c"
format_stamp: "Formatted at 2026-04-30 07:54:41 on dist-test-slave-1g5s"
I20260430 07:54:41.529772  2951 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.011s	sys 0.000s
I20260430 07:54:41.536399  2965 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:41.539716  2951 fs_manager.cc:730] Time spent opening block manager: real 0.006s	user 0.002s	sys 0.000s
I20260430 07:54:41.540139  2951 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "e1c76ca1eeb8497091405e8838166d4c"
format_stamp: "Formatted at 2026-04-30 07:54:41 on dist-test-slave-1g5s"
I20260430 07:54:41.540467  2951 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:41.564787  2951 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:41.566009  2951 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:41.566453  2951 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:41.568225  2951 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:41.570530  2951 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:41.570689  2951 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:41.570886  2951 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:41.570997  2951 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:41.617749  2951 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:41067
I20260430 07:54:41.617820  3077 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:41067 every 8 connection(s)
I20260430 07:54:41.619892  2951 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:54:41.621960   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 2951
I20260430 07:54:41.622201   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-1/wal/instance
I20260430 07:54:41.627066   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.3:0
--local_ip_for_outbound_sockets=127.0.105.3
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:36957
--builtin_ntp_servers=127.0.105.20:38709
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_flush_memrowset=false
--enable_flush_deltamemstores=false
--tablet_copy_download_threads_nums_per_session=4
--log_segment_size_mb=1 with env {}
I20260430 07:54:41.638389  3078 heartbeater.cc:344] Connected to a master server at 127.0.105.62:36957
I20260430 07:54:41.638840  3078 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:41.639851  3078 heartbeater.cc:507] Master 127.0.105.62:36957 requested a full tablet report, sending...
I20260430 07:54:41.641887  2761 ts_manager.cc:194] Registered new tserver with Master: e1c76ca1eeb8497091405e8838166d4c (127.0.105.2:41067)
I20260430 07:54:41.643220  2761 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:34201
I20260430 07:54:41.931475  2947 heartbeater.cc:499] Master 127.0.105.62:36957 was elected leader, sending a full tablet report...
W20260430 07:54:42.181416  3082 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:54:42.181881  3082 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:54:42.182063  3082 flags.cc:432] Enabled unsafe flag: --enable_flush_deltamemstores=false
W20260430 07:54:42.182188  3082 flags.cc:432] Enabled unsafe flag: --enable_flush_memrowset=false
W20260430 07:54:42.182302  3082 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:54:42.203071  3082 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:54:42.203307  3082 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.3
I20260430 07:54:42.223703  3082 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38709
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_segment_size_mb=1
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.105.3
--webserver_port=0
--enable_flush_deltamemstores=false
--enable_flush_memrowset=false
--tserver_master_addrs=127.0.105.62:36957
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.3
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:54:42.225630  3082 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:54:42.230233  3082 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:54:42.251567  3088 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:42.253602  3087 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:54:42.255846  3090 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:54:42.259298  3082 server_base.cc:1061] running on GCE node
I20260430 07:54:42.260282  3082 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:54:42.262333  3082 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:54:42.263690  3082 hybrid_clock.cc:648] HybridClock initialized: now 1777535682263608 us; error 70 us; skew 500 ppm
I20260430 07:54:42.264303  3082 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:54:42.271262  3082 webserver.cc:492] Webserver started at http://127.0.105.3:40645/ using document root <none> and password file <none>
I20260430 07:54:42.272339  3082 fs_manager.cc:362] Metadata directory not provided
I20260430 07:54:42.272533  3082 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:54:42.272977  3082 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:54:42.282922  3082 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data/instance:
uuid: "f0a630514a8e403c89769e0367de2852"
format_stamp: "Formatted at 2026-04-30 07:54:42 on dist-test-slave-1g5s"
I20260430 07:54:42.284896  3082 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal/instance:
uuid: "f0a630514a8e403c89769e0367de2852"
format_stamp: "Formatted at 2026-04-30 07:54:42 on dist-test-slave-1g5s"
I20260430 07:54:42.302212  3082 fs_manager.cc:696] Time spent creating directory manager: real 0.013s	user 0.009s	sys 0.004s
I20260430 07:54:42.308385  3096 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:42.311097  3082 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.003s	sys 0.000s
I20260430 07:54:42.311466  3082 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal
uuid: "f0a630514a8e403c89769e0367de2852"
format_stamp: "Formatted at 2026-04-30 07:54:42 on dist-test-slave-1g5s"
I20260430 07:54:42.311750  3082 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:54:42.356626  3082 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:54:42.357750  3082 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:54:42.358150  3082 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:54:42.360404  3082 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:54:42.363483  3082 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:54:42.363669  3082 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:42.363834  3082 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:54:42.364842  3082 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:54:42.467697  3082 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.3:41071
I20260430 07:54:42.467880  3208 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.3:41071 every 8 connection(s)
I20260430 07:54:42.469661  3082 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
I20260430 07:54:42.471553   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3082
I20260430 07:54:42.471755   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0/minicluster-data/ts-2/wal/instance
I20260430 07:54:42.500176  3209 heartbeater.cc:344] Connected to a master server at 127.0.105.62:36957
I20260430 07:54:42.500598  3209 heartbeater.cc:461] Registering TS with master...
I20260430 07:54:42.501793  3209 heartbeater.cc:507] Master 127.0.105.62:36957 requested a full tablet report, sending...
I20260430 07:54:42.506934  2761 ts_manager.cc:194] Registered new tserver with Master: f0a630514a8e403c89769e0367de2852 (127.0.105.3:41071)
I20260430 07:54:42.508786  2761 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.3:42953
I20260430 07:54:42.512624   420 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20260430 07:54:42.575861  2761 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39728:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20260430 07:54:42.578467  2761 catalog_manager.cc:7033] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20260430 07:54:42.648715  3078 heartbeater.cc:499] Master 127.0.105.62:36957 was elected leader, sending a full tablet report...
I20260430 07:54:42.665378  3013 tablet_service.cc:1511] Processing CreateTablet for tablet f2ae0bbdb8b745c78286580261322e45 (DEFAULT_TABLE table=test-workload [id=07745942cc3d47a08b9070f51acf51dd]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:42.667027  2882 tablet_service.cc:1511] Processing CreateTablet for tablet f2ae0bbdb8b745c78286580261322e45 (DEFAULT_TABLE table=test-workload [id=07745942cc3d47a08b9070f51acf51dd]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:42.670895  3013 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f2ae0bbdb8b745c78286580261322e45. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:42.667608  3144 tablet_service.cc:1511] Processing CreateTablet for tablet f2ae0bbdb8b745c78286580261322e45 (DEFAULT_TABLE table=test-workload [id=07745942cc3d47a08b9070f51acf51dd]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:54:42.672180  2882 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f2ae0bbdb8b745c78286580261322e45. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:42.672628  3144 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f2ae0bbdb8b745c78286580261322e45. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:54:42.698967  3233 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Bootstrap starting.
I20260430 07:54:42.710316  3233 tablet_bootstrap.cc:654] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:42.711104  3234 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Bootstrap starting.
I20260430 07:54:42.710528  3235 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Bootstrap starting.
I20260430 07:54:42.713014  3233 log.cc:826] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:42.716678  3234 tablet_bootstrap.cc:654] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:42.718081  3234 log.cc:826] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:42.718565  3235 tablet_bootstrap.cc:654] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Neither blocks nor log segments found. Creating new log.
I20260430 07:54:42.719841  3233 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: No bootstrap required, opened a new log
I20260430 07:54:42.720363  3233 ts_tablet_manager.cc:1403] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Time spent bootstrapping tablet: real 0.022s	user 0.006s	sys 0.009s
I20260430 07:54:42.720721  3235 log.cc:826] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Log is configured to *not* fsync() on all Append() calls
I20260430 07:54:42.727981  3234 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: No bootstrap required, opened a new log
I20260430 07:54:42.728471  3234 ts_tablet_manager.cc:1403] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Time spent bootstrapping tablet: real 0.018s	user 0.005s	sys 0.007s
I20260430 07:54:42.728874  3235 tablet_bootstrap.cc:492] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: No bootstrap required, opened a new log
I20260430 07:54:42.731974  3235 ts_tablet_manager.cc:1403] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Time spent bootstrapping tablet: real 0.022s	user 0.018s	sys 0.000s
I20260430 07:54:42.734088  3233 raft_consensus.cc:359] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.734797  3233 raft_consensus.cc:385] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:42.735018  3233 raft_consensus.cc:740] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1c76ca1eeb8497091405e8838166d4c, State: Initialized, Role: FOLLOWER
I20260430 07:54:42.736102  3233 consensus_queue.cc:260] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.739538  3233 ts_tablet_manager.cc:1434] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Time spent starting tablet: real 0.019s	user 0.013s	sys 0.004s
I20260430 07:54:42.743768  3234 raft_consensus.cc:359] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.745098  3234 raft_consensus.cc:385] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:42.745268  3234 raft_consensus.cc:740] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9a1d89563949413caa9bd86e37b2dcfc, State: Initialized, Role: FOLLOWER
I20260430 07:54:42.744943  3235 raft_consensus.cc:359] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.745503  3235 raft_consensus.cc:385] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:54:42.747887  3235 raft_consensus.cc:740] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f0a630514a8e403c89769e0367de2852, State: Initialized, Role: FOLLOWER
I20260430 07:54:42.749634  3235 consensus_queue.cc:260] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.749540  3234 consensus_queue.cc:260] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:42.754520  3209 heartbeater.cc:499] Master 127.0.105.62:36957 was elected leader, sending a full tablet report...
I20260430 07:54:42.760618  3235 ts_tablet_manager.cc:1434] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Time spent starting tablet: real 0.028s	user 0.024s	sys 0.000s
I20260430 07:54:42.766534  3234 ts_tablet_manager.cc:1434] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Time spent starting tablet: real 0.037s	user 0.024s	sys 0.009s
W20260430 07:54:42.876387  3079 tablet.cc:2404] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20260430 07:54:42.897387  2948 tablet.cc:2404] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20260430 07:54:42.973968  3210 tablet.cc:2404] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:54:43.114504  3239 raft_consensus.cc:493] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:54:43.115235  3239 raft_consensus.cc:515] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.118930  3239 leader_election.cc:290] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers f0a630514a8e403c89769e0367de2852 (127.0.105.3:41071), 9a1d89563949413caa9bd86e37b2dcfc (127.0.105.1:42857)
I20260430 07:54:43.130654  3164 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "e1c76ca1eeb8497091405e8838166d4c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f0a630514a8e403c89769e0367de2852" is_pre_election: true
I20260430 07:54:43.130738  2902 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "e1c76ca1eeb8497091405e8838166d4c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9a1d89563949413caa9bd86e37b2dcfc" is_pre_election: true
I20260430 07:54:43.131276  3164 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1c76ca1eeb8497091405e8838166d4c in term 0.
I20260430 07:54:43.131279  2902 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1c76ca1eeb8497091405e8838166d4c in term 0.
I20260430 07:54:43.132376  2968 leader_election.cc:304] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9a1d89563949413caa9bd86e37b2dcfc, e1c76ca1eeb8497091405e8838166d4c; no voters: 
I20260430 07:54:43.132901  3239 raft_consensus.cc:2804] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20260430 07:54:43.133414  3239 raft_consensus.cc:493] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:54:43.133566  3239 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:43.136991  3239 raft_consensus.cc:515] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.139356  3239 leader_election.cc:290] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [CANDIDATE]: Term 1 election: Requested vote from peers f0a630514a8e403c89769e0367de2852 (127.0.105.3:41071), 9a1d89563949413caa9bd86e37b2dcfc (127.0.105.1:42857)
I20260430 07:54:43.139436  3164 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "e1c76ca1eeb8497091405e8838166d4c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f0a630514a8e403c89769e0367de2852"
I20260430 07:54:43.139779  3164 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:43.140894  2902 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "e1c76ca1eeb8497091405e8838166d4c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9a1d89563949413caa9bd86e37b2dcfc"
I20260430 07:54:43.142585  2902 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:54:43.142665  3164 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1c76ca1eeb8497091405e8838166d4c in term 1.
I20260430 07:54:43.144924  2967 leader_election.cc:304] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: e1c76ca1eeb8497091405e8838166d4c, f0a630514a8e403c89769e0367de2852; no voters: 
I20260430 07:54:43.146829  3239 raft_consensus.cc:2804] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:54:43.147624  3239 raft_consensus.cc:697] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 LEADER]: Becoming Leader. State: Replica: e1c76ca1eeb8497091405e8838166d4c, State: Running, Role: LEADER
I20260430 07:54:43.148689  3239 consensus_queue.cc:237] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.149966  2902 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1c76ca1eeb8497091405e8838166d4c in term 1.
I20260430 07:54:43.156786  2760 catalog_manager.cc:5671] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c reported cstate change: term changed from 0 to 1, leader changed from <none> to e1c76ca1eeb8497091405e8838166d4c (127.0.105.2). New cstate: current_term: 1 leader_uuid: "e1c76ca1eeb8497091405e8838166d4c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } health_report { overall_health: HEALTHY } } }
I20260430 07:54:43.247071  2902 raft_consensus.cc:1275] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 1 FOLLOWER]: Refusing update from remote peer e1c76ca1eeb8497091405e8838166d4c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:43.247072  3164 raft_consensus.cc:1275] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 1 FOLLOWER]: Refusing update from remote peer e1c76ca1eeb8497091405e8838166d4c: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20260430 07:54:43.248847  3244 consensus_queue.cc:1048] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [LEADER]: Connected to new peer: Peer: permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:54:43.249629  3239 consensus_queue.cc:1048] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [LEADER]: Connected to new peer: Peer: permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:54:43.277029  3252 mvcc.cc:204] Tried to move back new op lower bound from 7280786158565199872 to 7280786158184316928. Current Snapshot: MvccSnapshot[applied={T|T < 7280786158565199872}]
I20260430 07:54:43.363459  2902 tablet_service.cc:2044] Received Run Leader Election RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45"
dest_uuid: "9a1d89563949413caa9bd86e37b2dcfc"
 from {username='slave'} at 127.0.0.1:38570
I20260430 07:54:43.363775  2902 raft_consensus.cc:493] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20260430 07:54:43.363899  2902 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:43.373147  2902 raft_consensus.cc:515] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.376210  2902 leader_election.cc:290] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [CANDIDATE]: Term 2 election: Requested vote from peers f0a630514a8e403c89769e0367de2852 (127.0.105.3:41071), e1c76ca1eeb8497091405e8838166d4c (127.0.105.2:41067)
I20260430 07:54:43.407706  3164 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "9a1d89563949413caa9bd86e37b2dcfc" candidate_term: 2 candidate_status { last_received { term: 1 index: 2 } } ignore_live_leader: true dest_uuid: "f0a630514a8e403c89769e0367de2852"
I20260430 07:54:43.408092  3164 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:43.412811  3164 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9a1d89563949413caa9bd86e37b2dcfc in term 2.
I20260430 07:54:43.414214  2836 leader_election.cc:304] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9a1d89563949413caa9bd86e37b2dcfc, f0a630514a8e403c89769e0367de2852; no voters: 
I20260430 07:54:43.414371  3033 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "f2ae0bbdb8b745c78286580261322e45" candidate_uuid: "9a1d89563949413caa9bd86e37b2dcfc" candidate_term: 2 candidate_status { last_received { term: 1 index: 2 } } ignore_live_leader: true dest_uuid: "e1c76ca1eeb8497091405e8838166d4c"
I20260430 07:54:43.415405  3240 raft_consensus.cc:2804] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:54:43.416055  3033 raft_consensus.cc:3055] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 LEADER]: Stepping down as leader of term 1
I20260430 07:54:43.416668  3033 raft_consensus.cc:740] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 LEADER]: Becoming Follower/Learner. State: Replica: e1c76ca1eeb8497091405e8838166d4c, State: Running, Role: LEADER
I20260430 07:54:43.417805  3033 consensus_queue.cc:260] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.418570  3033 raft_consensus.cc:3060] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:54:43.422498  3033 raft_consensus.cc:2468] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9a1d89563949413caa9bd86e37b2dcfc in term 2.
I20260430 07:54:43.431535  3240 raft_consensus.cc:697] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [term 2 LEADER]: Becoming Leader. State: Replica: 9a1d89563949413caa9bd86e37b2dcfc, State: Running, Role: LEADER
I20260430 07:54:43.433531  3240 consensus_queue.cc:237] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } }
I20260430 07:54:43.440824  2761 catalog_manager.cc:5671] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc reported cstate change: term changed from 1 to 2, leader changed from e1c76ca1eeb8497091405e8838166d4c (127.0.105.2) to 9a1d89563949413caa9bd86e37b2dcfc (127.0.105.1). New cstate: current_term: 2 leader_uuid: "9a1d89563949413caa9bd86e37b2dcfc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9a1d89563949413caa9bd86e37b2dcfc" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 42857 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 } health_report { overall_health: UNKNOWN } } }
W20260430 07:54:43.449298  2993 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:57012: Illegal state: replica e1c76ca1eeb8497091405e8838166d4c is not leader of this config: current role FOLLOWER
W20260430 07:54:43.449390  2992 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:57012: Illegal state: replica e1c76ca1eeb8497091405e8838166d4c is not leader of this config: current role FOLLOWER
W20260430 07:54:43.455464  2992 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:57012: Illegal state: replica e1c76ca1eeb8497091405e8838166d4c is not leader of this config: current role FOLLOWER
W20260430 07:54:43.471570  3119 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.471410  3123 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.471606  3120 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.472913  3121 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.475862  3118 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.477433  3122 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.477762  3117 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
W20260430 07:54:43.478528  3124 tablet_service.cc:731] failed op from {username='slave'} at 127.0.0.1:32786: Illegal state: replica f0a630514a8e403c89769e0367de2852 is not leader of this config: current role FOLLOWER
I20260430 07:54:43.495339  3164 raft_consensus.cc:1275] T f2ae0bbdb8b745c78286580261322e45 P f0a630514a8e403c89769e0367de2852 [term 2 FOLLOWER]: Refusing update from remote peer 9a1d89563949413caa9bd86e37b2dcfc: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20260430 07:54:43.495424  3033 raft_consensus.cc:1275] T f2ae0bbdb8b745c78286580261322e45 P e1c76ca1eeb8497091405e8838166d4c [term 2 FOLLOWER]: Refusing update from remote peer 9a1d89563949413caa9bd86e37b2dcfc: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20260430 07:54:43.496162  3271 consensus_queue.cc:1048] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [LEADER]: Connected to new peer: Peer: permanent_uuid: "f0a630514a8e403c89769e0367de2852" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 41071 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20260430 07:54:43.496600  3240 consensus_queue.cc:1048] T f2ae0bbdb8b745c78286580261322e45 P 9a1d89563949413caa9bd86e37b2dcfc [LEADER]: Connected to new peer: Peer: permanent_uuid: "e1c76ca1eeb8497091405e8838166d4c" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 41067 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20260430 07:54:43.583076  3254 mvcc.cc:204] Tried to move back new op lower bound from 7280786159871111168 to 7280786159352524800. Current Snapshot: MvccSnapshot[applied={T|T < 7280786159871111168 or (T in {7280786159871111168})}]
W20260430 07:54:57.318856  3220 outbound_call.cc:321] RPC callback for RPC call kudu.tserver.TabletServerService.Write -> {remote=127.0.105.1:42857, user_credentials={real_user=slave}} blocked reactor thread for 74702.6us
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/integration-tests/tablet_copy-itest.cc:2164: Failure
Failed
Bad status: Timed out: Timed out waiting for number of WAL segments on tablet f2ae0bbdb8b745c78286580261322e45 on TS 0 to be 6. Found 5
I20260430 07:55:13.529330   420 external_mini_cluster-itest-base.cc:80] Found fatal failure
I20260430 07:55:13.530337   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 0 with UUID 9a1d89563949413caa9bd86e37b2dcfc and pid 2820
************************ BEGIN STACKS **************************
[New LWP 2821]
[New LWP 2822]
[New LWP 2823]
[New LWP 2824]
[New LWP 2830]
[New LWP 2831]
[New LWP 2832]
[New LWP 2835]
[New LWP 2836]
[New LWP 2837]
[New LWP 2838]
[New LWP 2839]
[New LWP 2840]
[New LWP 2841]
[New LWP 2842]
[New LWP 2843]
[New LWP 2844]
[New LWP 2845]
[New LWP 2846]
[New LWP 2847]
[New LWP 2848]
[New LWP 2849]
[New LWP 2850]
[New LWP 2851]
[New LWP 2852]
[New LWP 2853]
[New LWP 2854]
[New LWP 2855]
[New LWP 2856]
[New LWP 2857]
[New LWP 2858]
[New LWP 2859]
[New LWP 2860]
[New LWP 2861]
[New LWP 2862]
[New LWP 2863]
[New LWP 2864]
[New LWP 2865]
[New LWP 2866]
[New LWP 2867]
[New LWP 2868]
[New LWP 2869]
[New LWP 2870]
[New LWP 2871]
[New LWP 2872]
[New LWP 2873]
[New LWP 2874]
[New LWP 2875]
[New LWP 2876]
[New LWP 2877]
[New LWP 2878]
[New LWP 2879]
[New LWP 2880]
[New LWP 2881]
[New LWP 2882]
[New LWP 2883]
[New LWP 2884]
[New LWP 2885]
[New LWP 2886]
[New LWP 2887]
[New LWP 2888]
[New LWP 2889]
[New LWP 2890]
[New LWP 2891]
[New LWP 2892]
[New LWP 2893]
[New LWP 2894]
[New LWP 2895]
[New LWP 2896]
[New LWP 2897]
[New LWP 2898]
[New LWP 2899]
[New LWP 2900]
[New LWP 2901]
[New LWP 2902]
[New LWP 2903]
[New LWP 2904]
[New LWP 2905]
[New LWP 2906]
[New LWP 2907]
[New LWP 2908]
[New LWP 2909]
[New LWP 2910]
[New LWP 2911]
[New LWP 2912]
[New LWP 2913]
[New LWP 2914]
[New LWP 2915]
[New LWP 2916]
[New LWP 2917]
[New LWP 2918]
[New LWP 2919]
[New LWP 2920]
[New LWP 2921]
[New LWP 2922]
[New LWP 2923]
[New LWP 2924]
[New LWP 2925]
[New LWP 2926]
[New LWP 2927]
[New LWP 2928]
[New LWP 2929]
[New LWP 2930]
[New LWP 2931]
[New LWP 2932]
[New LWP 2933]
[New LWP 2934]
[New LWP 2935]
[New LWP 2936]
[New LWP 2937]
[New LWP 2938]
[New LWP 2939]
[New LWP 2940]
[New LWP 2941]
[New LWP 2942]
[New LWP 2943]
[New LWP 2944]
[New LWP 2945]
[New LWP 2946]
[New LWP 2947]
[New LWP 2948]
[New LWP 3256]
[New LWP 3341]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007efe482a9d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 2820 "kudu"   0x00007efe482a9d50 in ?? ()
  2    LWP 2821 "kudu"   0x00007efe482a5fb9 in ?? ()
  3    LWP 2822 "kudu"   0x00007efe482a5fb9 in ?? ()
  4    LWP 2823 "kudu"   0x00007efe482a5fb9 in ?? ()
  5    LWP 2824 "kernel-watcher-" 0x00007efe482a5fb9 in ?? ()
  6    LWP 2830 "ntp client-2830" 0x00007efe482a99e2 in ?? ()
  7    LWP 2831 "file cache-evic" 0x00007efe482a5fb9 in ?? ()
  8    LWP 2832 "sq_acceptor" 0x00007efe40ddcbb9 in ?? ()
  9    LWP 2835 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  10   LWP 2836 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  11   LWP 2837 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  12   LWP 2838 "rpc reactor-283" 0x00007efe40de9947 in ?? ()
  13   LWP 2839 "MaintenanceMgr " 0x00007efe482a5ad3 in ?? ()
  14   LWP 2840 "txn-status-mana" 0x00007efe482a5fb9 in ?? ()
  15   LWP 2841 "collect_and_rem" 0x00007efe482a5fb9 in ?? ()
  16   LWP 2842 "tc-session-exp-" 0x00007efe482a5fb9 in ?? ()
  17   LWP 2843 "rpc worker-2843" 0x00007efe482a5ad3 in ?? ()
  18   LWP 2844 "rpc worker-2844" 0x00007efe482a5ad3 in ?? ()
  19   LWP 2845 "rpc worker-2845" 0x00007efe482a5ad3 in ?? ()
  20   LWP 2846 "rpc worker-2846" 0x00007efe482a5ad3 in ?? ()
  21   LWP 2847 "rpc worker-2847" 0x00007efe482a5ad3 in ?? ()
  22   LWP 2848 "rpc worker-2848" 0x00007efe482a5ad3 in ?? ()
  23   LWP 2849 "rpc worker-2849" 0x00007efe482a5ad3 in ?? ()
  24   LWP 2850 "rpc worker-2850" 0x00007efe482a5ad3 in ?? ()
  25   LWP 2851 "rpc worker-2851" 0x00007efe482a5ad3 in ?? ()
  26   LWP 2852 "rpc worker-2852" 0x00007efe482a5ad3 in ?? ()
  27   LWP 2853 "rpc worker-2853" 0x00007efe482a5ad3 in ?? ()
  28   LWP 2854 "rpc worker-2854" 0x00007efe482a5ad3 in ?? ()
  29   LWP 2855 "rpc worker-2855" 0x00007efe482a5ad3 in ?? ()
  30   LWP 2856 "rpc worker-2856" 0x00007efe482a5ad3 in ?? ()
  31   LWP 2857 "rpc worker-2857" 0x00007efe482a5ad3 in ?? ()
  32   LWP 2858 "rpc worker-2858" 0x00007efe482a5ad3 in ?? ()
  33   LWP 2859 "rpc worker-2859" 0x00007efe482a5ad3 in ?? ()
  34   LWP 2860 "rpc worker-2860" 0x00007efe482a5ad3 in ?? ()
  35   LWP 2861 "rpc worker-2861" 0x00007efe482a5ad3 in ?? ()
  36   LWP 2862 "rpc worker-2862" 0x00007efe482a5ad3 in ?? ()
  37   LWP 2863 "rpc worker-2863" 0x00007efe482a5ad3 in ?? ()
  38   LWP 2864 "rpc worker-2864" 0x00007efe482a5ad3 in ?? ()
  39   LWP 2865 "rpc worker-2865" 0x00007efe482a5ad3 in ?? ()
  40   LWP 2866 "rpc worker-2866" 0x00007efe482a5ad3 in ?? ()
  41   LWP 2867 "rpc worker-2867" 0x00007efe482a5ad3 in ?? ()
  42   LWP 2868 "rpc worker-2868" 0x00007efe482a5ad3 in ?? ()
  43   LWP 2869 "rpc worker-2869" 0x00007efe482a5ad3 in ?? ()
  44   LWP 2870 "rpc worker-2870" 0x00007efe482a5ad3 in ?? ()
  45   LWP 2871 "rpc worker-2871" 0x00007efe482a5ad3 in ?? ()
  46   LWP 2872 "rpc worker-2872" 0x00007efe482a5ad3 in ?? ()
  47   LWP 2873 "rpc worker-2873" 0x00007efe482a5ad3 in ?? ()
  48   LWP 2874 "rpc worker-2874" 0x00007efe482a5ad3 in ?? ()
  49   LWP 2875 "rpc worker-2875" 0x00007efe482a5ad3 in ?? ()
  50   LWP 2876 "rpc worker-2876" 0x00007efe482a5ad3 in ?? ()
  51   LWP 2877 "rpc worker-2877" 0x00007efe482a5ad3 in ?? ()
  52   LWP 2878 "rpc worker-2878" 0x00007efe482a5ad3 in ?? ()
  53   LWP 2879 "rpc worker-2879" 0x00007efe482a5ad3 in ?? ()
  54   LWP 2880 "rpc worker-2880" 0x00007efe482a5ad3 in ?? ()
  55   LWP 2881 "rpc worker-2881" 0x00007efe482a5ad3 in ?? ()
  56   LWP 2882 "rpc worker-2882" 0x00007efe482a5ad3 in ?? ()
  57   LWP 2883 "rpc worker-2883" 0x00007efe482a5ad3 in ?? ()
  58   LWP 2884 "rpc worker-2884" 0x00007efe482a5ad3 in ?? ()
  59   LWP 2885 "rpc worker-2885" 0x00007efe482a5ad3 in ?? ()
  60   LWP 2886 "rpc worker-2886" 0x00007efe482a5ad3 in ?? ()
  61   LWP 2887 "rpc worker-2887" 0x00007efe482a5ad3 in ?? ()
  62   LWP 2888 "rpc worker-2888" 0x00007efe482a5ad3 in ?? ()
  63   LWP 2889 "rpc worker-2889" 0x00007efe482a5ad3 in ?? ()
  64   LWP 2890 "rpc worker-2890" 0x00007efe482a5ad3 in ?? ()
  65   LWP 2891 "rpc worker-2891" 0x00007efe482a5ad3 in ?? ()
  66   LWP 2892 "rpc worker-2892" 0x00007efe482a5ad3 in ?? ()
  67   LWP 2893 "rpc worker-2893" 0x00007efe482a5ad3 in ?? ()
  68   LWP 2894 "rpc worker-2894" 0x00007efe482a5ad3 in ?? ()
  69   LWP 2895 "rpc worker-2895" 0x00007efe482a5ad3 in ?? ()
  70   LWP 2896 "rpc worker-2896" 0x00007efe482a5ad3 in ?? ()
  71   LWP 2897 "rpc worker-2897" 0x00007efe482a5ad3 in ?? ()
  72   LWP 2898 "rpc worker-2898" 0x00007efe482a5ad3 in ?? ()
  73   LWP 2899 "rpc worker-2899" 0x00007efe482a5ad3 in ?? ()
  74   LWP 2900 "rpc worker-2900" 0x00007efe482a5ad3 in ?? ()
  75   LWP 2901 "rpc worker-2901" 0x00007efe482a5ad3 in ?? ()
  76   LWP 2902 "rpc worker-2902" 0x00007efe482a5ad3 in ?? ()
  77   LWP 2903 "rpc worker-2903" 0x00007efe482a5ad3 in ?? ()
  78   LWP 2904 "rpc worker-2904" 0x00007efe482a5ad3 in ?? ()
  79   LWP 2905 "rpc worker-2905" 0x00007efe482a5ad3 in ?? ()
  80   LWP 2906 "rpc worker-2906" 0x00007efe482a5ad3 in ?? ()
  81   LWP 2907 "rpc worker-2907" 0x00007efe482a5ad3 in ?? ()
  82   LWP 2908 "rpc worker-2908" 0x00007efe482a5ad3 in ?? ()
  83   LWP 2909 "rpc worker-2909" 0x00007efe482a5ad3 in ?? ()
  84   LWP 2910 "rpc worker-2910" 0x00007efe482a5ad3 in ?? ()
  85   LWP 2911 "rpc worker-2911" 0x00007efe482a5ad3 in ?? ()
  86   LWP 2912 "rpc worker-2912" 0x00007efe482a5ad3 in ?? ()
  87   LWP 2913 "rpc worker-2913" 0x00007efe482a5ad3 in ?? ()
  88   LWP 2914 "rpc worker-2914" 0x00007efe482a5ad3 in ?? ()
  89   LWP 2915 "rpc worker-2915" 0x00007efe482a5ad3 in ?? ()
  90   LWP 2916 "rpc worker-2916" 0x00007efe482a5ad3 in ?? ()
  91   LWP 2917 "rpc worker-2917" 0x00007efe482a5ad3 in ?? ()
  92   LWP 2918 "rpc worker-2918" 0x00007efe482a5ad3 in ?? ()
  93   LWP 2919 "rpc worker-2919" 0x00007efe482a5ad3 in ?? ()
  94   LWP 2920 "rpc worker-2920" 0x00007efe482a5ad3 in ?? ()
  95   LWP 2921 "rpc worker-2921" 0x00007efe482a5ad3 in ?? ()
  96   LWP 2922 "rpc worker-2922" 0x00007efe482a5ad3 in ?? ()
  97   LWP 2923 "rpc worker-2923" 0x00007efe482a5ad3 in ?? ()
  98   LWP 2924 "rpc worker-2924" 0x00007efe482a5ad3 in ?? ()
  99   LWP 2925 "rpc worker-2925" 0x00007efe482a5ad3 in ?? ()
  100  LWP 2926 "rpc worker-2926" 0x00007efe482a5ad3 in ?? ()
  101  LWP 2927 "rpc worker-2927" 0x00007efe482a5ad3 in ?? ()
  102  LWP 2928 "rpc worker-2928" 0x00007efe482a5ad3 in ?? ()
  103  LWP 2929 "rpc worker-2929" 0x00007efe482a5ad3 in ?? ()
  104  LWP 2930 "rpc worker-2930" 0x00007efe482a5ad3 in ?? ()
  105  LWP 2931 "rpc worker-2931" 0x00007efe482a5ad3 in ?? ()
  106  LWP 2932 "rpc worker-2932" 0x00007efe482a5ad3 in ?? ()
  107  LWP 2933 "rpc worker-2933" 0x00007efe482a5ad3 in ?? ()
  108  LWP 2934 "rpc worker-2934" 0x00007efe482a5ad3 in ?? ()
  109  LWP 2935 "rpc worker-2935" 0x00007efe482a5ad3 in ?? ()
  110  LWP 2936 "rpc worker-2936" 0x00007efe482a5ad3 in ?? ()
  111  LWP 2937 "rpc worker-2937" 0x00007efe482a5ad3 in ?? ()
  112  LWP 2938 "rpc worker-2938" 0x00007efe482a5ad3 in ?? ()
  113  LWP 2939 "rpc worker-2939" 0x00007efe482a5ad3 in ?? ()
  114  LWP 2940 "rpc worker-2940" 0x00007efe482a5ad3 in ?? ()
  115  LWP 2941 "rpc worker-2941" 0x00007efe482a5ad3 in ?? ()
  116  LWP 2942 "rpc worker-2942" 0x00007efe482a5ad3 in ?? ()
  117  LWP 2943 "diag-logger-294" 0x00007efe482a5fb9 in ?? ()
  118  LWP 2944 "result-tracker-" 0x00007efe482a5fb9 in ?? ()
  119  LWP 2945 "excess-log-dele" 0x00007efe482a5fb9 in ?? ()
  120  LWP 2946 "acceptor-2946" 0x00007efe40deafc7 in ?? ()
  121  LWP 2947 "heartbeat-2947" 0x00007efe482a5fb9 in ?? ()
  122  LWP 2948 "maintenance_sch" 0x00007efe482a5fb9 in ?? ()
  123  LWP 3256 "wal-append [wor" 0x00007efe482a5fb9 in ?? ()
  124  LWP 3341 "raft [worker]-3" 0x00007efe482a5fb9 in ?? ()

Thread 124 (LWP 3341):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000045e0360e in ?? ()
#2  0x00000000000002a6 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x00007efe39db7bc0 in ?? ()
#5  0x00007efe39db7850 in ?? ()
#6  0x000000000000054c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 123 (LWP 3256):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00006020000b92f8 in ?? ()
#2  0x00000000000012c4 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061a00004ffb0 in ?? ()
#5  0x00007efdfaf8b130 in ?? ()
#6  0x0000000000002588 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 122 (LWP 2948):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdfd0c9700 in ?? ()
#2  0x0000000000000085 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007efdfd0c9750 in ?? ()
#6  0x000000000000010a in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 2947):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000023 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061300001d840 in ?? ()
#5  0x00007efdfd8e1610 in ?? ()
#6  0x0000000000000046 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 120 (LWP 2946):
#0  0x00007efe40deafc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007efdfe108d70 in ?? ()
#3  0x00007efdfe108da0 in ?? ()
#4  0x00007efdfe108ea0 in ?? ()
#5  0x00007efdfe108d90 in ?? ()
#6  0x00007efdfe108e00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007efdfe108f20 in ?? ()
#11 0x00007efdfe10865c in ?? ()
#12 0x00000032fe1085d0 in ?? ()
#13 0x00007efdfd90c000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 2945):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdfe922f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 2944):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efdff13b120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007efdff13b110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 2943):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 2942):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 2941):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 2940):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 2939):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 2938):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 2937):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 2936):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 2935):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 2934):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 2933):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 2932):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 2931):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 2930):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 2929):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 2928):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 2927):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 2926):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 2925):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 2924):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 2923):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 2922):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 2921):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 2920):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 2919):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 2918):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 2917):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 2916):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 2915):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 2914):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 2913):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 2912):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 2911):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 2910):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 2909):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 2908):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 2907):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 2906):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 2905):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 2904):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 2903):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 2902):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000006 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a6888 in ?? ()
#4  0x00007efe1453beb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe1453bed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 75 (LWP 2901):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 74 (LWP 2900):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 2899):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 2898):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 2897):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 2896):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 2895):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 2894):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 2893):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 2892):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 2891):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 2890):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 2889):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 2888):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 2887):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 2886):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 2885):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 2884):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 2883):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 2882):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007efe1e71ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe1e71ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe1e71ced0 in ?? ()
#11 0x00007efe1e71ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 2881):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 2880):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 2879):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 2878):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 2877):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 2876):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 2875):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 2874):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 2873):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 2872):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 2871):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 2870):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 2869):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 2868):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 2867):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 2866):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 2865):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 2864):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 2863):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 2862):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000003 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00009ffec in ?? ()
#4  0x00007efe288fdeb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe288fded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00009ffa0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe288fded0 in ?? ()
#11 0x00007efe288fde90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 35 (LWP 2861):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0000967fc in ?? ()
#4  0x00007efe29115eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe29115ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000967b0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe29115ed0 in ?? ()
#11 0x00007efe29115e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 34 (LWP 2860):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x00000000000001b6 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00008fff8 in ?? ()
#4  0x00007efe2992deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2992ded0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 33 (LWP 2859):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00008d008 in ?? ()
#4  0x00007efe2a145eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2a145ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 32 (LWP 2858):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008680c in ?? ()
#4  0x00007efe2a95deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2a95ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000867c0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2a95ded0 in ?? ()
#11 0x00007efe2a95de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 31 (LWP 2857):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000211 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008000c in ?? ()
#4  0x00007efe2b175eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2b175ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00007ffc0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2b175ed0 in ?? ()
#11 0x00007efe2b175e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 30 (LWP 2856):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007681c in ?? ()
#4  0x00007efe2b98deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2b98ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000767d0 in ?? ()
#9  0x00007efe482a5770 in ?? ()
#10 0x00007efe2b98ded0 in ?? ()
#11 0x00007efe2b98de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 29 (LWP 2855):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x00000000000005d4 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d000070018 in ?? ()
#4  0x00007efe2c1a5eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007efe2c1a5ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 28 (LWP 2854):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 2853):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 2852):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 2851):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 2850):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 2849):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 2848):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 2847):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 2846):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 2845):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 2844):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 2843):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 2842):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe32b01ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007efe32b01cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 2841):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007efe33329270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 2840):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe33b3f260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007efe33b3f250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 2839):
#0  0x00007efe482a5ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 2838):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe34b8e340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007efe34b8e330 in ?? ()
#4  0x00007efe34b8e540 in ?? ()
#5  0x00007efe34b8e380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007efe34b8e400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb979842e088000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000fe046969c80 in ?? ()
#17 0x00007efe34b8e3e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe34b8e3b0 in ?? ()
#20 0x3fb979842e088000 in ?? ()
#21 0x0000000034b8e400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb979842e088000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 2837):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe353a5340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007efe353a5330 in ?? ()
#4  0x00007efe353a5540 in ?? ()
#5  0x00007efe353a5380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007efe353a5400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb96a1e49fe0000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff283 in ?? ()
#16 0x00000fe046a6ca80 in ?? ()
#17 0x00007efe353a53e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe353a53b0 in ?? ()
#20 0x3fb96a1e49fe0000 in ?? ()
#21 0x00000000353a5400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb96a1e49fe0000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 2836):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe35bb4340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007efe35bb4330 in ?? ()
#4  0x00007efe35bb4540 in ?? ()
#5  0x00007efe35bb4380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007efe35bb4400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3f91346fc0c64000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff288 in ?? ()
#16 0x00000fe046b6e880 in ?? ()
#17 0x00007efe35bb43e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x00007efe35bb43b0 in ?? ()
#20 0x3f91346fc0c64000 in ?? ()
#21 0x0000000035bb4400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3f91346fc0c64000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 2835):
#0  0x00007efe40de9947 in ?? ()
#1  0x00007efe37bad340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007efe37bad330 in ?? ()
#4  0x00007efe37bad540 in ?? ()
#5  0x00007efe37bad380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007efe37bad400 in ?? ()
#8  0x00007efe4345e25d in ?? ()
#9  0x3fb95f3305323000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x000000004bbf23d0 in ?? ()
#14 0x00007efe00000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe046f6da80 in ?? ()
#17 0x00007efe37bad3e0 in ?? ()
#18 0x00007efe43462ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 2832):
#0  0x00007efe40ddcbb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007efe395b77b8 in ?? ()
#3  0x000060200001e750 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 2831):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe38db60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 2830):
#0  0x00007efe482a99e2 in ?? ()
#1  0x00007efe385b3bc0 in ?? ()
#2  0x00007efe385b3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007efe385b3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007efe385b3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007efe385b3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007efe4dc52bfc in ?? ()
#12 0x00007efe4dc42209 in ?? ()
#13 0x00007efe4dc467f6 in ?? ()
#14 0x00007efe4dc4b230 in ?? ()
#15 0x00007efe4dc4b059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007efe44cd1529 in ?? ()
#18 0x00007efe4829f6db in ?? ()
#19 0x00000fe0470ae688 in ?? ()
#20 0x00007efe385b3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 2824):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x00007efe3a5b8ca0 in ?? ()
#2  0x00000000000000a7 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007efe3a5b8c90 in ?? ()
#6  0x000000000000014e in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 2823):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 2822):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 2821):
#0  0x00007efe482a5fb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007efe3bdbc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 2820):
#0  0x00007efe482a9d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:14.514652   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 1 with UUID e1c76ca1eeb8497091405e8838166d4c and pid 2951
************************ BEGIN STACKS **************************
[New LWP 2952]
[New LWP 2953]
[New LWP 2954]
[New LWP 2955]
[New LWP 2961]
[New LWP 2962]
[New LWP 2963]
[New LWP 2966]
[New LWP 2967]
[New LWP 2968]
[New LWP 2969]
[New LWP 2970]
[New LWP 2971]
[New LWP 2972]
[New LWP 2973]
[New LWP 2974]
[New LWP 2975]
[New LWP 2976]
[New LWP 2977]
[New LWP 2978]
[New LWP 2979]
[New LWP 2980]
[New LWP 2981]
[New LWP 2982]
[New LWP 2983]
[New LWP 2984]
[New LWP 2985]
[New LWP 2986]
[New LWP 2987]
[New LWP 2988]
[New LWP 2989]
[New LWP 2990]
[New LWP 2991]
[New LWP 2992]
[New LWP 2993]
[New LWP 2994]
[New LWP 2995]
[New LWP 2996]
[New LWP 2997]
[New LWP 2998]
[New LWP 2999]
[New LWP 3000]
[New LWP 3001]
[New LWP 3002]
[New LWP 3003]
[New LWP 3004]
[New LWP 3005]
[New LWP 3006]
[New LWP 3007]
[New LWP 3008]
[New LWP 3009]
[New LWP 3010]
[New LWP 3011]
[New LWP 3012]
[New LWP 3013]
[New LWP 3014]
[New LWP 3015]
[New LWP 3016]
[New LWP 3017]
[New LWP 3018]
[New LWP 3019]
[New LWP 3020]
[New LWP 3021]
[New LWP 3022]
[New LWP 3023]
[New LWP 3024]
[New LWP 3025]
[New LWP 3026]
[New LWP 3027]
[New LWP 3028]
[New LWP 3029]
[New LWP 3030]
[New LWP 3031]
[New LWP 3032]
[New LWP 3033]
[New LWP 3034]
[New LWP 3035]
[New LWP 3036]
[New LWP 3037]
[New LWP 3038]
[New LWP 3039]
[New LWP 3040]
[New LWP 3041]
[New LWP 3042]
[New LWP 3043]
[New LWP 3044]
[New LWP 3045]
[New LWP 3046]
[New LWP 3047]
[New LWP 3048]
[New LWP 3049]
[New LWP 3050]
[New LWP 3051]
[New LWP 3052]
[New LWP 3053]
[New LWP 3054]
[New LWP 3055]
[New LWP 3056]
[New LWP 3057]
[New LWP 3058]
[New LWP 3059]
[New LWP 3060]
[New LWP 3061]
[New LWP 3062]
[New LWP 3063]
[New LWP 3064]
[New LWP 3065]
[New LWP 3066]
[New LWP 3067]
[New LWP 3068]
[New LWP 3069]
[New LWP 3070]
[New LWP 3071]
[New LWP 3072]
[New LWP 3073]
[New LWP 3074]
[New LWP 3075]
[New LWP 3076]
[New LWP 3077]
[New LWP 3078]
[New LWP 3079]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007f2243909d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 2951 "kudu"   0x00007f2243909d50 in ?? ()
  2    LWP 2952 "kudu"   0x00007f2243905fb9 in ?? ()
  3    LWP 2953 "kudu"   0x00007f2243905fb9 in ?? ()
  4    LWP 2954 "kudu"   0x00007f2243905fb9 in ?? ()
  5    LWP 2955 "kernel-watcher-" 0x00007f2243905fb9 in ?? ()
  6    LWP 2961 "ntp client-2961" 0x00007f22439099e2 in ?? ()
  7    LWP 2962 "file cache-evic" 0x00007f2243905fb9 in ?? ()
  8    LWP 2963 "sq_acceptor" 0x00007f223c43cbb9 in ?? ()
  9    LWP 2966 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  10   LWP 2967 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  11   LWP 2968 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  12   LWP 2969 "rpc reactor-296" 0x00007f223c449947 in ?? ()
  13   LWP 2970 "MaintenanceMgr " 0x00007f2243905ad3 in ?? ()
  14   LWP 2971 "txn-status-mana" 0x00007f2243905fb9 in ?? ()
  15   LWP 2972 "collect_and_rem" 0x00007f2243905fb9 in ?? ()
  16   LWP 2973 "tc-session-exp-" 0x00007f2243905fb9 in ?? ()
  17   LWP 2974 "rpc worker-2974" 0x00007f2243905ad3 in ?? ()
  18   LWP 2975 "rpc worker-2975" 0x00007f2243905ad3 in ?? ()
  19   LWP 2976 "rpc worker-2976" 0x00007f2243905ad3 in ?? ()
  20   LWP 2977 "rpc worker-2977" 0x00007f2243905ad3 in ?? ()
  21   LWP 2978 "rpc worker-2978" 0x00007f2243905ad3 in ?? ()
  22   LWP 2979 "rpc worker-2979" 0x00007f2243905ad3 in ?? ()
  23   LWP 2980 "rpc worker-2980" 0x00007f2243905ad3 in ?? ()
  24   LWP 2981 "rpc worker-2981" 0x00007f2243905ad3 in ?? ()
  25   LWP 2982 "rpc worker-2982" 0x00007f2243905ad3 in ?? ()
  26   LWP 2983 "rpc worker-2983" 0x00007f2243905ad3 in ?? ()
  27   LWP 2984 "rpc worker-2984" 0x00007f2243905ad3 in ?? ()
  28   LWP 2985 "rpc worker-2985" 0x00007f2243905ad3 in ?? ()
  29   LWP 2986 "rpc worker-2986" 0x00007f2243905ad3 in ?? ()
  30   LWP 2987 "rpc worker-2987" 0x00007f2243905ad3 in ?? ()
  31   LWP 2988 "rpc worker-2988" 0x00007f2243905ad3 in ?? ()
  32   LWP 2989 "rpc worker-2989" 0x00007f2243905ad3 in ?? ()
  33   LWP 2990 "rpc worker-2990" 0x00007f2243905ad3 in ?? ()
  34   LWP 2991 "rpc worker-2991" 0x00007f2243905ad3 in ?? ()
  35   LWP 2992 "rpc worker-2992" 0x00007f2243905ad3 in ?? ()
  36   LWP 2993 "rpc worker-2993" 0x00007f2243905ad3 in ?? ()
  37   LWP 2994 "rpc worker-2994" 0x00007f2243905ad3 in ?? ()
  38   LWP 2995 "rpc worker-2995" 0x00007f2243905ad3 in ?? ()
  39   LWP 2996 "rpc worker-2996" 0x00007f2243905ad3 in ?? ()
  40   LWP 2997 "rpc worker-2997" 0x00007f2243905ad3 in ?? ()
  41   LWP 2998 "rpc worker-2998" 0x00007f2243905ad3 in ?? ()
  42   LWP 2999 "rpc worker-2999" 0x00007f2243905ad3 in ?? ()
  43   LWP 3000 "rpc worker-3000" 0x00007f2243905ad3 in ?? ()
  44   LWP 3001 "rpc worker-3001" 0x00007f2243905ad3 in ?? ()
  45   LWP 3002 "rpc worker-3002" 0x00007f2243905ad3 in ?? ()
  46   LWP 3003 "rpc worker-3003" 0x00007f2243905ad3 in ?? ()
  47   LWP 3004 "rpc worker-3004" 0x00007f2243905ad3 in ?? ()
  48   LWP 3005 "rpc worker-3005" 0x00007f2243905ad3 in ?? ()
  49   LWP 3006 "rpc worker-3006" 0x00007f2243905ad3 in ?? ()
  50   LWP 3007 "rpc worker-3007" 0x00007f2243905ad3 in ?? ()
  51   LWP 3008 "rpc worker-3008" 0x00007f2243905ad3 in ?? ()
  52   LWP 3009 "rpc worker-3009" 0x00007f2243905ad3 in ?? ()
  53   LWP 3010 "rpc worker-3010" 0x00007f2243905ad3 in ?? ()
  54   LWP 3011 "rpc worker-3011" 0x00007f2243905ad3 in ?? ()
  55   LWP 3012 "rpc worker-3012" 0x00007f2243905ad3 in ?? ()
  56   LWP 3013 "rpc worker-3013" 0x00007f2243905ad3 in ?? ()
  57   LWP 3014 "rpc worker-3014" 0x00007f2243905ad3 in ?? ()
  58   LWP 3015 "rpc worker-3015" 0x00007f2243905ad3 in ?? ()
  59   LWP 3016 "rpc worker-3016" 0x00007f2243905ad3 in ?? ()
  60   LWP 3017 "rpc worker-3017" 0x00007f2243905ad3 in ?? ()
  61   LWP 3018 "rpc worker-3018" 0x00007f2243905ad3 in ?? ()
  62   LWP 3019 "rpc worker-3019" 0x00007f2243905ad3 in ?? ()
  63   LWP 3020 "rpc worker-3020" 0x00007f2243905ad3 in ?? ()
  64   LWP 3021 "rpc worker-3021" 0x00007f2243905ad3 in ?? ()
  65   LWP 3022 "rpc worker-3022" 0x00007f2243905ad3 in ?? ()
  66   LWP 3023 "rpc worker-3023" 0x00007f2243905ad3 in ?? ()
  67   LWP 3024 "rpc worker-3024" 0x00007f2243905ad3 in ?? ()
  68   LWP 3025 "rpc worker-3025" 0x00007f2243905ad3 in ?? ()
  69   LWP 3026 "rpc worker-3026" 0x00007f2243905ad3 in ?? ()
  70   LWP 3027 "rpc worker-3027" 0x00007f2243905ad3 in ?? ()
  71   LWP 3028 "rpc worker-3028" 0x00007f2243905ad3 in ?? ()
  72   LWP 3029 "rpc worker-3029" 0x00007f2243905ad3 in ?? ()
  73   LWP 3030 "rpc worker-3030" 0x00007f2243905ad3 in ?? ()
  74   LWP 3031 "rpc worker-3031" 0x00007f2243905ad3 in ?? ()
  75   LWP 3032 "rpc worker-3032" 0x00007f2243905ad3 in ?? ()
  76   LWP 3033 "rpc worker-3033" 0x00007f2243905ad3 in ?? ()
  77   LWP 3034 "rpc worker-3034" 0x00007f2243905ad3 in ?? ()
  78   LWP 3035 "rpc worker-3035" 0x00007f2243905ad3 in ?? ()
  79   LWP 3036 "rpc worker-3036" 0x00007f2243905ad3 in ?? ()
  80   LWP 3037 "rpc worker-3037" 0x00007f2243905ad3 in ?? ()
  81   LWP 3038 "rpc worker-3038" 0x00007f2243905ad3 in ?? ()
  82   LWP 3039 "rpc worker-3039" 0x00007f2243905ad3 in ?? ()
  83   LWP 3040 "rpc worker-3040" 0x00007f2243905ad3 in ?? ()
  84   LWP 3041 "rpc worker-3041" 0x00007f2243905ad3 in ?? ()
  85   LWP 3042 "rpc worker-3042" 0x00007f2243905ad3 in ?? ()
  86   LWP 3043 "rpc worker-3043" 0x00007f2243905ad3 in ?? ()
  87   LWP 3044 "rpc worker-3044" 0x00007f2243905ad3 in ?? ()
  88   LWP 3045 "rpc worker-3045" 0x00007f2243905ad3 in ?? ()
  89   LWP 3046 "rpc worker-3046" 0x00007f2243905ad3 in ?? ()
  90   LWP 3047 "rpc worker-3047" 0x00007f2243905ad3 in ?? ()
  91   LWP 3048 "rpc worker-3048" 0x00007f2243905ad3 in ?? ()
  92   LWP 3049 "rpc worker-3049" 0x00007f2243905ad3 in ?? ()
  93   LWP 3050 "rpc worker-3050" 0x00007f2243905ad3 in ?? ()
  94   LWP 3051 "rpc worker-3051" 0x00007f2243905ad3 in ?? ()
  95   LWP 3052 "rpc worker-3052" 0x00007f2243905ad3 in ?? ()
  96   LWP 3053 "rpc worker-3053" 0x00007f2243905ad3 in ?? ()
  97   LWP 3054 "rpc worker-3054" 0x00007f2243905ad3 in ?? ()
  98   LWP 3055 "rpc worker-3055" 0x00007f2243905ad3 in ?? ()
  99   LWP 3056 "rpc worker-3056" 0x00007f2243905ad3 in ?? ()
  100  LWP 3057 "rpc worker-3057" 0x00007f2243905ad3 in ?? ()
  101  LWP 3058 "rpc worker-3058" 0x00007f2243905ad3 in ?? ()
  102  LWP 3059 "rpc worker-3059" 0x00007f2243905ad3 in ?? ()
  103  LWP 3060 "rpc worker-3060" 0x00007f2243905ad3 in ?? ()
  104  LWP 3061 "rpc worker-3061" 0x00007f2243905ad3 in ?? ()
  105  LWP 3062 "rpc worker-3062" 0x00007f2243905ad3 in ?? ()
  106  LWP 3063 "rpc worker-3063" 0x00007f2243905ad3 in ?? ()
  107  LWP 3064 "rpc worker-3064" 0x00007f2243905ad3 in ?? ()
  108  LWP 3065 "rpc worker-3065" 0x00007f2243905ad3 in ?? ()
  109  LWP 3066 "rpc worker-3066" 0x00007f2243905ad3 in ?? ()
  110  LWP 3067 "rpc worker-3067" 0x00007f2243905ad3 in ?? ()
  111  LWP 3068 "rpc worker-3068" 0x00007f2243905ad3 in ?? ()
  112  LWP 3069 "rpc worker-3069" 0x00007f2243905ad3 in ?? ()
  113  LWP 3070 "rpc worker-3070" 0x00007f2243905ad3 in ?? ()
  114  LWP 3071 "rpc worker-3071" 0x00007f2243905ad3 in ?? ()
  115  LWP 3072 "rpc worker-3072" 0x00007f2243905ad3 in ?? ()
  116  LWP 3073 "rpc worker-3073" 0x00007f2243905ad3 in ?? ()
  117  LWP 3074 "diag-logger-307" 0x00007f2243905fb9 in ?? ()
  118  LWP 3075 "result-tracker-" 0x00007f2243905fb9 in ?? ()
  119  LWP 3076 "excess-log-dele" 0x00007f2243905fb9 in ?? ()
  120  LWP 3077 "acceptor-3077" 0x00007f223c44afc7 in ?? ()
  121  LWP 3078 "heartbeat-3078" 0x00007f2243905fb9 in ?? ()
  122  LWP 3079 "maintenance_sch" 0x00007f2243905fb9 in ?? ()

Thread 122 (LWP 3079):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21f8729700 in ?? ()
#2  0x0000000000000086 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007f21f8729750 in ?? ()
#6  0x000000000000010c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 3078):
#0  0x00007f2243905fb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000025 in ?? ()
#3  0x0000000100000081 in ?? ()
#4  0x000061300001d844 in ?? ()
#5  0x00007f21f8f41610 in ?? ()
#6  0x000000000000004b in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x00007f21f8f41630 in ?? ()
#9  0x0000000200000000 in ?? ()
#10 0x0000008000000189 in ?? ()
#11 0x00007f21f8f416b0 in ?? ()
#12 0x00000fe43f1e82d8 in ?? ()
#13 0x0000000000000000 in ?? ()

Thread 120 (LWP 3077):
#0  0x00007f223c44afc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007f21f9768d70 in ?? ()
#3  0x00007f21f9768da0 in ?? ()
#4  0x00007f21f9768ea0 in ?? ()
#5  0x00007f21f9768d90 in ?? ()
#6  0x00007f21f9768e00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007f21f9768f20 in ?? ()
#11 0x00007f21f976865c in ?? ()
#12 0x00007f21f97685d0 in ?? ()
#13 0x00007f21f8f6c000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 3076):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21f9f82f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 3075):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f21fa79b120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007f21fa79b110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 3074):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 3073):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 3072):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 3071):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 3070):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 3069):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 3068):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 3067):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 3066):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 3065):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 3064):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 3063):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 3062):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 3061):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 3060):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 3059):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 3058):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 3057):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 3056):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 3055):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 3054):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 3053):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 3052):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 3051):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 3050):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 3049):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 3048):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 3047):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 3046):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 3045):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 3044):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 3043):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 3042):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 3041):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 3040):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 3039):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 3038):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 3037):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 3036):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 3035):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 3034):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 3033):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000a57 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0001a688c in ?? ()
#4  0x00007f220fb9beb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f220fb9bed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0001a6840 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f220fb9bed0 in ?? ()
#11 0x00007f220fb9be90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 75 (LWP 3032):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000776 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a0088 in ?? ()
#4  0x00007f22103b3eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f22103b3ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 74 (LWP 3031):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 3030):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 3029):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 3028):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 3027):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 3026):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 3025):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 3024):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 3023):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 3022):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 3021):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 3020):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 3019):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 3018):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 3017):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 3016):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 3015):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 3014):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 3013):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007f2219d7ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2219d7ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f2219d7ced0 in ?? ()
#11 0x00007f2219d7ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 3012):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 3011):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 3010):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 3009):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 3008):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 3007):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 3006):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 3005):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 3004):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 3003):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 3002):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 3001):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 3000):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 2999):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 2998):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 2997):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 2996):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 2995):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 2994):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 2993):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000003 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00009ffec in ?? ()
#4  0x00007f2223f5deb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2223f5ded0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00009ffa0 in ?? ()
#9  0x00007f2243905770 in ?? ()
#10 0x00007f2223f5ded0 in ?? ()
#11 0x00007f2223f5de90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 35 (LWP 2992):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0000967f8 in ?? ()
#4  0x00007f2224775eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f2224775ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 34 (LWP 2991):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 33 (LWP 2990):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 32 (LWP 2989):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 31 (LWP 2988):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 30 (LWP 2987):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 29 (LWP 2986):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 28 (LWP 2985):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 2984):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 2983):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 2982):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 2981):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 2980):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 2979):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 2978):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 2977):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 2976):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 2975):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 2974):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 2973):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f222e0b8ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007f222e0b8cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 2972):
#0  0x00007f2243905fb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007f222e8ba270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 2971):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f222f0ba260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007f222f0ba250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 2970):
#0  0x00007f2243905ad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 2969):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22300bd340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007f22300bd330 in ?? ()
#4  0x00007f22300bd540 in ?? ()
#5  0x00007f22300bd380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007f22300bd400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb95a6a95e4b000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff283 in ?? ()
#16 0x00000fe4c600fa80 in ?? ()
#17 0x00007f22300bd3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22300bd3b0 in ?? ()
#20 0x3fb95a6a95e4b000 in ?? ()
#21 0x00000000300bd400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb95a6a95e4b000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 2968):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22308be340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007f22308be330 in ?? ()
#4  0x00007f22308be540 in ?? ()
#5  0x00007f22308be380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007f22308be400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb9586cdc327000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000fe4c610fc80 in ?? ()
#17 0x00007f22308be3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22308be3b0 in ?? ()
#20 0x3fb9586cdc327000 in ?? ()
#21 0x00000000308be400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb9586cdc327000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 2967):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f22310bf340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007f22310bf330 in ?? ()
#4  0x00007f22310bf540 in ?? ()
#5  0x00007f22310bf380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007f22310bf400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb96d3c83f25000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe4c620fe80 in ?? ()
#17 0x00007f22310bf3e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x00007f22310bf3b0 in ?? ()
#20 0x3fb96d3c83f25000 in ?? ()
#21 0x00000000310bf400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3fb96d3c83f25000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 2966):
#0  0x00007f223c449947 in ?? ()
#1  0x00007f2232ca3340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007f2232ca3330 in ?? ()
#4  0x00007f2232ca3540 in ?? ()
#5  0x00007f2232ca3380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007f2232ca3400 in ?? ()
#8  0x00007f223eabe25d in ?? ()
#9  0x3fb94abf45471000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x00000000472523d0 in ?? ()
#14 0x00007f2200000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000fe4c658c680 in ?? ()
#17 0x00007f2232ca33e0 in ?? ()
#18 0x00007f223eac2ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 2963):
#0  0x00007f223c43cbb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007f2234cb77b8 in ?? ()
#3  0x000060200001f510 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 2962):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f22344b60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 2961):
#0  0x00007f22439099e2 in ?? ()
#1  0x00007f2233cb3bc0 in ?? ()
#2  0x00007f2233cb3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007f2233cb3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007f2233cb3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007f2233cb3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007f22492b2bfc in ?? ()
#12 0x00007f22492a2209 in ?? ()
#13 0x00007f22492a67f6 in ?? ()
#14 0x00007f22492ab230 in ?? ()
#15 0x00007f22492ab059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007f2240331529 in ?? ()
#18 0x00007f22438ff6db in ?? ()
#19 0x00000fe4c678e688 in ?? ()
#20 0x00007f2233cb3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 2955):
#0  0x00007f2243905fb9 in ?? ()
#1  0x00007f2235cb8ca0 in ?? ()
#2  0x00000000000000a9 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007f2235cb8c90 in ?? ()
#6  0x0000000000000152 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 2954):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 2953):
#0  0x00007f2243905fb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 2952):
#0  0x00007f2243905fb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007f22374bc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 2951):
#0  0x00007f2243909d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:15.584566   420 external_mini_cluster-itest-base.cc:86] Attempting to dump stacks of TS 2 with UUID f0a630514a8e403c89769e0367de2852 and pid 3082
************************ BEGIN STACKS **************************
[New LWP 3083]
[New LWP 3084]
[New LWP 3085]
[New LWP 3086]
[New LWP 3092]
[New LWP 3093]
[New LWP 3094]
[New LWP 3097]
[New LWP 3098]
[New LWP 3099]
[New LWP 3100]
[New LWP 3101]
[New LWP 3102]
[New LWP 3103]
[New LWP 3104]
[New LWP 3105]
[New LWP 3106]
[New LWP 3107]
[New LWP 3108]
[New LWP 3109]
[New LWP 3110]
[New LWP 3111]
[New LWP 3112]
[New LWP 3113]
[New LWP 3114]
[New LWP 3115]
[New LWP 3116]
[New LWP 3117]
[New LWP 3118]
[New LWP 3119]
[New LWP 3120]
[New LWP 3121]
[New LWP 3122]
[New LWP 3123]
[New LWP 3124]
[New LWP 3125]
[New LWP 3126]
[New LWP 3127]
[New LWP 3128]
[New LWP 3129]
[New LWP 3130]
[New LWP 3131]
[New LWP 3132]
[New LWP 3133]
[New LWP 3134]
[New LWP 3135]
[New LWP 3136]
[New LWP 3137]
[New LWP 3138]
[New LWP 3139]
[New LWP 3140]
[New LWP 3141]
[New LWP 3142]
[New LWP 3143]
[New LWP 3144]
[New LWP 3145]
[New LWP 3146]
[New LWP 3147]
[New LWP 3148]
[New LWP 3149]
[New LWP 3150]
[New LWP 3151]
[New LWP 3152]
[New LWP 3153]
[New LWP 3154]
[New LWP 3155]
[New LWP 3156]
[New LWP 3157]
[New LWP 3158]
[New LWP 3159]
[New LWP 3160]
[New LWP 3161]
[New LWP 3162]
[New LWP 3163]
[New LWP 3164]
[New LWP 3165]
[New LWP 3166]
[New LWP 3167]
[New LWP 3168]
[New LWP 3169]
[New LWP 3170]
[New LWP 3171]
[New LWP 3172]
[New LWP 3173]
[New LWP 3174]
[New LWP 3175]
[New LWP 3176]
[New LWP 3177]
[New LWP 3178]
[New LWP 3179]
[New LWP 3180]
[New LWP 3181]
[New LWP 3182]
[New LWP 3183]
[New LWP 3184]
[New LWP 3185]
[New LWP 3186]
[New LWP 3187]
[New LWP 3188]
[New LWP 3189]
[New LWP 3190]
[New LWP 3191]
[New LWP 3192]
[New LWP 3193]
[New LWP 3194]
[New LWP 3195]
[New LWP 3196]
[New LWP 3197]
[New LWP 3198]
[New LWP 3199]
[New LWP 3200]
[New LWP 3201]
[New LWP 3202]
[New LWP 3203]
[New LWP 3204]
[New LWP 3205]
[New LWP 3206]
[New LWP 3207]
[New LWP 3208]
[New LWP 3209]
[New LWP 3210]
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6969
Cannot access memory at address 0x763a3a72656e6961
0x00007f53c4e90d50 in ?? ()
  Id   Target Id         Frame 
* 1    LWP 3082 "kudu"   0x00007f53c4e90d50 in ?? ()
  2    LWP 3083 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  3    LWP 3084 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  4    LWP 3085 "kudu"   0x00007f53c4e8cfb9 in ?? ()
  5    LWP 3086 "kernel-watcher-" 0x00007f53c4e8cfb9 in ?? ()
  6    LWP 3092 "ntp client-3092" 0x00007f53c4e909e2 in ?? ()
  7    LWP 3093 "file cache-evic" 0x00007f53c4e8cfb9 in ?? ()
  8    LWP 3094 "sq_acceptor" 0x00007f53bd9c3bb9 in ?? ()
  9    LWP 3097 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  10   LWP 3098 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  11   LWP 3099 "rpc reactor-309" 0x00007f53bd9d0947 in ?? ()
  12   LWP 3100 "rpc reactor-310" 0x00007f53bd9d0947 in ?? ()
  13   LWP 3101 "MaintenanceMgr " 0x00007f53c4e8cad3 in ?? ()
  14   LWP 3102 "txn-status-mana" 0x00007f53c4e8cfb9 in ?? ()
  15   LWP 3103 "collect_and_rem" 0x00007f53c4e8cfb9 in ?? ()
  16   LWP 3104 "tc-session-exp-" 0x00007f53c4e8cfb9 in ?? ()
  17   LWP 3105 "rpc worker-3105" 0x00007f53c4e8cad3 in ?? ()
  18   LWP 3106 "rpc worker-3106" 0x00007f53c4e8cad3 in ?? ()
  19   LWP 3107 "rpc worker-3107" 0x00007f53c4e8cad3 in ?? ()
  20   LWP 3108 "rpc worker-3108" 0x00007f53c4e8cad3 in ?? ()
  21   LWP 3109 "rpc worker-3109" 0x00007f53c4e8cad3 in ?? ()
  22   LWP 3110 "rpc worker-3110" 0x00007f53c4e8cad3 in ?? ()
  23   LWP 3111 "rpc worker-3111" 0x00007f53c4e8cad3 in ?? ()
  24   LWP 3112 "rpc worker-3112" 0x00007f53c4e8cad3 in ?? ()
  25   LWP 3113 "rpc worker-3113" 0x00007f53c4e8cad3 in ?? ()
  26   LWP 3114 "rpc worker-3114" 0x00007f53c4e8cad3 in ?? ()
  27   LWP 3115 "rpc worker-3115" 0x00007f53c4e8cad3 in ?? ()
  28   LWP 3116 "rpc worker-3116" 0x00007f53c4e8cad3 in ?? ()
  29   LWP 3117 "rpc worker-3117" 0x00007f53c4e8cad3 in ?? ()
  30   LWP 3118 "rpc worker-3118" 0x00007f53c4e8cad3 in ?? ()
  31   LWP 3119 "rpc worker-3119" 0x00007f53c4e8cad3 in ?? ()
  32   LWP 3120 "rpc worker-3120" 0x00007f53c4e8cad3 in ?? ()
  33   LWP 3121 "rpc worker-3121" 0x00007f53c4e8cad3 in ?? ()
  34   LWP 3122 "rpc worker-3122" 0x00007f53c4e8cad3 in ?? ()
  35   LWP 3123 "rpc worker-3123" 0x00007f53c4e8cad3 in ?? ()
  36   LWP 3124 "rpc worker-3124" 0x00007f53c4e8cad3 in ?? ()
  37   LWP 3125 "rpc worker-3125" 0x00007f53c4e8cad3 in ?? ()
  38   LWP 3126 "rpc worker-3126" 0x00007f53c4e8cad3 in ?? ()
  39   LWP 3127 "rpc worker-3127" 0x00007f53c4e8cad3 in ?? ()
  40   LWP 3128 "rpc worker-3128" 0x00007f53c4e8cad3 in ?? ()
  41   LWP 3129 "rpc worker-3129" 0x00007f53c4e8cad3 in ?? ()
  42   LWP 3130 "rpc worker-3130" 0x00007f53c4e8cad3 in ?? ()
  43   LWP 3131 "rpc worker-3131" 0x00007f53c4e8cad3 in ?? ()
  44   LWP 3132 "rpc worker-3132" 0x00007f53c4e8cad3 in ?? ()
  45   LWP 3133 "rpc worker-3133" 0x00007f53c4e8cad3 in ?? ()
  46   LWP 3134 "rpc worker-3134" 0x00007f53c4e8cad3 in ?? ()
  47   LWP 3135 "rpc worker-3135" 0x00007f53c4e8cad3 in ?? ()
  48   LWP 3136 "rpc worker-3136" 0x00007f53c4e8cad3 in ?? ()
  49   LWP 3137 "rpc worker-3137" 0x00007f53c4e8cad3 in ?? ()
  50   LWP 3138 "rpc worker-3138" 0x00007f53c4e8cad3 in ?? ()
  51   LWP 3139 "rpc worker-3139" 0x00007f53c4e8cad3 in ?? ()
  52   LWP 3140 "rpc worker-3140" 0x00007f53c4e8cad3 in ?? ()
  53   LWP 3141 "rpc worker-3141" 0x00007f53c4e8cad3 in ?? ()
  54   LWP 3142 "rpc worker-3142" 0x00007f53c4e8cad3 in ?? ()
  55   LWP 3143 "rpc worker-3143" 0x00007f53c4e8cad3 in ?? ()
  56   LWP 3144 "rpc worker-3144" 0x00007f53c4e8cad3 in ?? ()
  57   LWP 3145 "rpc worker-3145" 0x00007f53c4e8cad3 in ?? ()
  58   LWP 3146 "rpc worker-3146" 0x00007f53c4e8cad3 in ?? ()
  59   LWP 3147 "rpc worker-3147" 0x00007f53c4e8cad3 in ?? ()
  60   LWP 3148 "rpc worker-3148" 0x00007f53c4e8cad3 in ?? ()
  61   LWP 3149 "rpc worker-3149" 0x00007f53c4e8cad3 in ?? ()
  62   LWP 3150 "rpc worker-3150" 0x00007f53c4e8cad3 in ?? ()
  63   LWP 3151 "rpc worker-3151" 0x00007f53c4e8cad3 in ?? ()
  64   LWP 3152 "rpc worker-3152" 0x00007f53c4e8cad3 in ?? ()
  65   LWP 3153 "rpc worker-3153" 0x00007f53c4e8cad3 in ?? ()
  66   LWP 3154 "rpc worker-3154" 0x00007f53c4e8cad3 in ?? ()
  67   LWP 3155 "rpc worker-3155" 0x00007f53c4e8cad3 in ?? ()
  68   LWP 3156 "rpc worker-3156" 0x00007f53c4e8cad3 in ?? ()
  69   LWP 3157 "rpc worker-3157" 0x00007f53c4e8cad3 in ?? ()
  70   LWP 3158 "rpc worker-3158" 0x00007f53c4e8cad3 in ?? ()
  71   LWP 3159 "rpc worker-3159" 0x00007f53c4e8cad3 in ?? ()
  72   LWP 3160 "rpc worker-3160" 0x00007f53c4e8cad3 in ?? ()
  73   LWP 3161 "rpc worker-3161" 0x00007f53c4e8cad3 in ?? ()
  74   LWP 3162 "rpc worker-3162" 0x00007f53c4e8cad3 in ?? ()
  75   LWP 3163 "rpc worker-3163" 0x00007f53c4e8cad3 in ?? ()
  76   LWP 3164 "rpc worker-3164" 0x00007f53c4e8cad3 in ?? ()
  77   LWP 3165 "rpc worker-3165" 0x00007f53c4e8cad3 in ?? ()
  78   LWP 3166 "rpc worker-3166" 0x00007f53c4e8cad3 in ?? ()
  79   LWP 3167 "rpc worker-3167" 0x00007f53c4e8cad3 in ?? ()
  80   LWP 3168 "rpc worker-3168" 0x00007f53c4e8cad3 in ?? ()
  81   LWP 3169 "rpc worker-3169" 0x00007f53c4e8cad3 in ?? ()
  82   LWP 3170 "rpc worker-3170" 0x00007f53c4e8cad3 in ?? ()
  83   LWP 3171 "rpc worker-3171" 0x00007f53c4e8cad3 in ?? ()
  84   LWP 3172 "rpc worker-3172" 0x00007f53c4e8cad3 in ?? ()
  85   LWP 3173 "rpc worker-3173" 0x00007f53c4e8cad3 in ?? ()
  86   LWP 3174 "rpc worker-3174" 0x00007f53c4e8cad3 in ?? ()
  87   LWP 3175 "rpc worker-3175" 0x00007f53c4e8cad3 in ?? ()
  88   LWP 3176 "rpc worker-3176" 0x00007f53c4e8cad3 in ?? ()
  89   LWP 3177 "rpc worker-3177" 0x00007f53c4e8cad3 in ?? ()
  90   LWP 3178 "rpc worker-3178" 0x00007f53c4e8cad3 in ?? ()
  91   LWP 3179 "rpc worker-3179" 0x00007f53c4e8cad3 in ?? ()
  92   LWP 3180 "rpc worker-3180" 0x00007f53c4e8cad3 in ?? ()
  93   LWP 3181 "rpc worker-3181" 0x00007f53c4e8cad3 in ?? ()
  94   LWP 3182 "rpc worker-3182" 0x00007f53c4e8cad3 in ?? ()
  95   LWP 3183 "rpc worker-3183" 0x00007f53c4e8cad3 in ?? ()
  96   LWP 3184 "rpc worker-3184" 0x00007f53c4e8cad3 in ?? ()
  97   LWP 3185 "rpc worker-3185" 0x00007f53c4e8cad3 in ?? ()
  98   LWP 3186 "rpc worker-3186" 0x00007f53c4e8cad3 in ?? ()
  99   LWP 3187 "rpc worker-3187" 0x00007f53c4e8cad3 in ?? ()
  100  LWP 3188 "rpc worker-3188" 0x00007f53c4e8cad3 in ?? ()
  101  LWP 3189 "rpc worker-3189" 0x00007f53c4e8cad3 in ?? ()
  102  LWP 3190 "rpc worker-3190" 0x00007f53c4e8cad3 in ?? ()
  103  LWP 3191 "rpc worker-3191" 0x00007f53c4e8cad3 in ?? ()
  104  LWP 3192 "rpc worker-3192" 0x00007f53c4e8cad3 in ?? ()
  105  LWP 3193 "rpc worker-3193" 0x00007f53c4e8cad3 in ?? ()
  106  LWP 3194 "rpc worker-3194" 0x00007f53c4e8cad3 in ?? ()
  107  LWP 3195 "rpc worker-3195" 0x00007f53c4e8cad3 in ?? ()
  108  LWP 3196 "rpc worker-3196" 0x00007f53c4e8cad3 in ?? ()
  109  LWP 3197 "rpc worker-3197" 0x00007f53c4e8cad3 in ?? ()
  110  LWP 3198 "rpc worker-3198" 0x00007f53c4e8cad3 in ?? ()
  111  LWP 3199 "rpc worker-3199" 0x00007f53c4e8cad3 in ?? ()
  112  LWP 3200 "rpc worker-3200" 0x00007f53c4e8cad3 in ?? ()
  113  LWP 3201 "rpc worker-3201" 0x00007f53c4e8cad3 in ?? ()
  114  LWP 3202 "rpc worker-3202" 0x00007f53c4e8cad3 in ?? ()
  115  LWP 3203 "rpc worker-3203" 0x00007f53c4e8cad3 in ?? ()
  116  LWP 3204 "rpc worker-3204" 0x00007f53c4e8cad3 in ?? ()
  117  LWP 3205 "diag-logger-320" 0x00007f53c4e8cfb9 in ?? ()
  118  LWP 3206 "result-tracker-" 0x00007f53c4e8cfb9 in ?? ()
  119  LWP 3207 "excess-log-dele" 0x00007f53c4e8cfb9 in ?? ()
  120  LWP 3208 "acceptor-3208" 0x00007f53bd9d1fc7 in ?? ()
  121  LWP 3209 "heartbeat-3209" 0x00007f53c4e8cfb9 in ?? ()
  122  LWP 3210 "maintenance_sch" 0x00007f53c4e8cfb9 in ?? ()

Thread 122 (LWP 3210):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f5379cb0700 in ?? ()
#2  0x0000000000000087 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000616000016ca0 in ?? ()
#5  0x00007f5379cb0750 in ?? ()
#6  0x000000000000010e in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 121 (LWP 3209):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x4b5301aec691978b in ?? ()
#2  0x0000000000000025 in ?? ()
#3  0x0000000100000081 in ?? ()
#4  0x000061300001d844 in ?? ()
#5  0x00007f537a4c8610 in ?? ()
#6  0x000000000000004b in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x00007f537a4c8630 in ?? ()
#9  0x0000000200000000 in ?? ()
#10 0x0000008000000189 in ?? ()
#11 0x00007f537a4c86b0 in ?? ()
#12 0x00000fea6f4990d8 in ?? ()
#13 0x0000000000000000 in ?? ()

Thread 120 (LWP 3208):
#0  0x00007f53bd9d1fc7 in ?? ()
#1  0x00006140000224a8 in ?? ()
#2  0x00007f537acefd70 in ?? ()
#3  0x00007f537acefda0 in ?? ()
#4  0x00007f537acefea0 in ?? ()
#5  0x00007f537acefd90 in ?? ()
#6  0x00007f537acefe00 in ?? ()
#7  0x0000000000000080 in ?? ()
#8  0x00000000008d957b in __sanitizer::theDepot ()
#9  0x0000000500000014 in ?? ()
#10 0x00007f537aceff20 in ?? ()
#11 0x00007f537acef65c in ?? ()
#12 0x000000327acef5d0 in ?? ()
#13 0x00007f537a4f3000 in ?? ()
#14 0x0000000000000000 in ?? ()

Thread 119 (LWP 3207):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f537b509f60 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 118 (LWP 3206):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f537bd22120 in ?? ()
#2  0x0000000000000021 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061100008caf0 in ?? ()
#5  0x00007f537bd22110 in ?? ()
#6  0x0000000000000042 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 117 (LWP 3205):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 116 (LWP 3204):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 115 (LWP 3203):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 114 (LWP 3202):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 113 (LWP 3201):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 112 (LWP 3200):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 111 (LWP 3199):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 110 (LWP 3198):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 109 (LWP 3197):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 108 (LWP 3196):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 107 (LWP 3195):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 106 (LWP 3194):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 105 (LWP 3193):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 104 (LWP 3192):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 103 (LWP 3191):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 102 (LWP 3190):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 101 (LWP 3189):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 100 (LWP 3188):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 99 (LWP 3187):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 98 (LWP 3186):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 97 (LWP 3185):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 96 (LWP 3184):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 95 (LWP 3183):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 94 (LWP 3182):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 93 (LWP 3181):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 92 (LWP 3180):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 91 (LWP 3179):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 90 (LWP 3178):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 89 (LWP 3177):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 88 (LWP 3176):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 87 (LWP 3175):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 86 (LWP 3174):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 85 (LWP 3173):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 84 (LWP 3172):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 83 (LWP 3171):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 82 (LWP 3170):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 81 (LWP 3169):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 80 (LWP 3168):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 79 (LWP 3167):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 78 (LWP 3166):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 77 (LWP 3165):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 76 (LWP 3164):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000941 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0001a688c in ?? ()
#4  0x00007f5391122eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f5391122ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0001a6840 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f5391122ed0 in ?? ()
#11 0x00007f5391122e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 75 (LWP 3163):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000856 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d0001a0088 in ?? ()
#4  0x00007f539193aeb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f539193aed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 74 (LWP 3162):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 73 (LWP 3161):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 72 (LWP 3160):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 71 (LWP 3159):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 70 (LWP 3158):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 69 (LWP 3157):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 68 (LWP 3156):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 67 (LWP 3155):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 66 (LWP 3154):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 65 (LWP 3153):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 64 (LWP 3152):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 63 (LWP 3151):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 62 (LWP 3150):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 61 (LWP 3149):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 60 (LWP 3148):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 59 (LWP 3147):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 58 (LWP 3146):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 57 (LWP 3145):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 56 (LWP 3144):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00012003c in ?? ()
#4  0x00007f539b303eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f539b303ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00011fff0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f539b303ed0 in ?? ()
#11 0x00007f539b303e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 55 (LWP 3143):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 54 (LWP 3142):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 53 (LWP 3141):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 52 (LWP 3140):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 51 (LWP 3139):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 50 (LWP 3138):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 49 (LWP 3137):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 48 (LWP 3136):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 47 (LWP 3135):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 46 (LWP 3134):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 45 (LWP 3133):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 44 (LWP 3132):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 43 (LWP 3131):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 42 (LWP 3130):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 41 (LWP 3129):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 40 (LWP 3128):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 39 (LWP 3127):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 38 (LWP 3126):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 37 (LWP 3125):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 36 (LWP 3124):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000081 in ?? ()
#3  0x000060d00009ffe8 in ?? ()
#4  0x00007f53a54e4eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a54e4ed0 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 35 (LWP 3123):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d0000967fc in ?? ()
#4  0x00007f53a5cfceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a5cfced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000967b0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a5cfced0 in ?? ()
#11 0x00007f53a5cfce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 34 (LWP 3122):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008fffc in ?? ()
#4  0x00007f53a6514eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a6514ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00008ffb0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a6514ed0 in ?? ()
#11 0x00007f53a6514e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 33 (LWP 3121):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008d00c in ?? ()
#4  0x00007f53a6d2ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a6d2ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00008cfc0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a6d2ced0 in ?? ()
#11 0x00007f53a6d2ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 32 (LWP 3120):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008680c in ?? ()
#4  0x00007f53a7544eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a7544ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000867c0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a7544ed0 in ?? ()
#11 0x00007f53a7544e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 31 (LWP 3119):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00008000c in ?? ()
#4  0x00007f53a7d5ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a7d5ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00007ffc0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a7d5ced0 in ?? ()
#11 0x00007f53a7d5ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 30 (LWP 3118):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007681c in ?? ()
#4  0x00007f53a8574eb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a8574ed0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d0000767d0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a8574ed0 in ?? ()
#11 0x00007f53a8574e90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 29 (LWP 3117):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000001 in ?? ()
#2  0x0000000100000081 in ?? ()
#3  0x000060d00007001c in ?? ()
#4  0x00007f53a8d8ceb0 in ?? ()
#5  0x0000008000000000 in ?? ()
#6  0x00007f53a8d8ced0 in ?? ()
#7  0x0000000000000001 in ?? ()
#8  0x000060d00006ffd0 in ?? ()
#9  0x00007f53c4e8c770 in ?? ()
#10 0x00007f53a8d8ced0 in ?? ()
#11 0x00007f53a8d8ce90 in ?? ()
#12 0x0000000000000000 in ?? ()

Thread 28 (LWP 3116):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 27 (LWP 3115):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 26 (LWP 3114):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 25 (LWP 3113):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 24 (LWP 3112):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 23 (LWP 3111):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 22 (LWP 3110):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 21 (LWP 3109):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 20 (LWP 3108):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 19 (LWP 3107):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 18 (LWP 3106):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 17 (LWP 3105):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 16 (LWP 3104):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53af6e9ce0 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000613000020060 in ?? ()
#5  0x00007f53af6e9cd0 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 15 (LWP 3103):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x4008000000000000 in ?? ()
#2  0x0000000000000006 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200001fe98 in ?? ()
#5  0x00007f53aff11270 in ?? ()
#6  0x000000000000000c in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 14 (LWP 3102):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b0727260 in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061800000c9a8 in ?? ()
#5  0x00007f53b0727250 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 13 (LWP 3101):
#0  0x00007f53c4e8cad3 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 12 (LWP 3100):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b1776340 in ?? ()
#2  0x000061a00000c680 in ?? ()
#3  0x00007f53b1776330 in ?? ()
#4  0x00007f53b1776540 in ?? ()
#5  0x00007f53b1776380 in ?? ()
#6  0x0000614000022698 in ?? ()
#7  0x00007f53b1776400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb97e0c34f53000 in ?? ()
#10 0x000061a00000c680 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c680 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000feaf62e6c80 in ?? ()
#17 0x00007f53b17763e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b17763b0 in ?? ()
#20 0x3fb97e0c34f53000 in ?? ()
#21 0x00000000b1776400 in ?? ()
#22 0x000061a00000c680 in ?? ()
#23 0x0000614000022698 in ?? ()
#24 0x3fb97e0c34f53000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 11 (LWP 3099):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b1f8d340 in ?? ()
#2  0x000061a00000c080 in ?? ()
#3  0x00007f53b1f8d330 in ?? ()
#4  0x00007f53b1f8d540 in ?? ()
#5  0x00007f53b1f8d380 in ?? ()
#6  0x0000614000022498 in ?? ()
#7  0x00007f53b1f8d400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb9760dbd245000 in ?? ()
#10 0x000061a00000c080 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000c080 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff284 in ?? ()
#16 0x00000feaf63e9a80 in ?? ()
#17 0x00007f53b1f8d3e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b1f8d3b0 in ?? ()
#20 0x3fb9760dbd245000 in ?? ()
#21 0x00000000b1f8d400 in ?? ()
#22 0x000061a00000c080 in ?? ()
#23 0x0000614000022498 in ?? ()
#24 0x3fb9760dbd245000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 10 (LWP 3098):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b27a4340 in ?? ()
#2  0x000061a00000ba80 in ?? ()
#3  0x00007f53b27a4330 in ?? ()
#4  0x00007f53b27a4540 in ?? ()
#5  0x00007f53b27a4380 in ?? ()
#6  0x0000614000022298 in ?? ()
#7  0x00007f53b27a4400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb957c3719e1000 in ?? ()
#10 0x000061a00000ba80 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000ba80 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff285 in ?? ()
#16 0x00000feaf64ec880 in ?? ()
#17 0x00007f53b27a43e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x00007f53b27a43b0 in ?? ()
#20 0x3fb957c3719e1000 in ?? ()
#21 0x00000000b27a4400 in ?? ()
#22 0x000061a00000ba80 in ?? ()
#23 0x0000614000022298 in ?? ()
#24 0x3fb957c3719e1000 in ?? ()
#25 0x0000000000000000 in ?? ()

Thread 9 (LWP 3097):
#0  0x00007f53bd9d0947 in ?? ()
#1  0x00007f53b47ad340 in ?? ()
#2  0x000061a00000b480 in ?? ()
#3  0x00007f53b47ad330 in ?? ()
#4  0x00007f53b47ad540 in ?? ()
#5  0x00007f53b47ad380 in ?? ()
#6  0x0000614000022098 in ?? ()
#7  0x00007f53b47ad400 in ?? ()
#8  0x00007f53c004525d in ?? ()
#9  0x3fb976a135e78000 in ?? ()
#10 0x000061a00000b480 in ?? ()
#11 0x42a2309ce5400000 in ?? ()
#12 0x000061a00000b480 in ?? ()
#13 0x00000000c87d93d0 in ?? ()
#14 0x00007f5300000000 in ?? ()
#15 0x41da7cc26b6ff287 in ?? ()
#16 0x00000feaf68eda80 in ?? ()
#17 0x00007f53b47ad3e0 in ?? ()
#18 0x00007f53c0049ba3 in ?? ()
#19 0x0000000000000000 in ?? ()

Thread 8 (LWP 3094):
#0  0x00007f53bd9c3bb9 in ?? ()
#1  0x00000000000000c8 in ?? ()
#2  0x00007f53b61b77b8 in ?? ()
#3  0x000060200001c790 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00000000000000c8 in ?? ()
#6  0x00000000008d11c1 in __sanitizer::theDepot ()
#7  0x0000000000000000 in ?? ()

Thread 7 (LWP 3093):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b59b60e0 in ?? ()
#2  0x0000000000000000 in ?? ()

Thread 6 (LWP 3092):
#0  0x00007f53c4e909e2 in ?? ()
#1  0x00007f53b51b3bc0 in ?? ()
#2  0x00007f53b51b3c60 in ?? ()
#3  0x000000000000001e in ?? ()
#4  0x0000000000000030 in ?? ()
#5  0x00007f53b51b3c10 in ?? ()
#6  0x00000000017d0860 in ?? ()
#7  0x00007f53b51b3c70 in ?? ()
#8  0x000061100008d450 in ?? ()
#9  0x00007f53b51b3c60 in ?? ()
#10 0x00000000008cb6b7 in __sanitizer::theDepot ()
#11 0x00007f53ca839bfc in ?? ()
#12 0x00007f53ca829209 in ?? ()
#13 0x00007f53ca82d7f6 in ?? ()
#14 0x00007f53ca832230 in ?? ()
#15 0x00007f53ca832059 in ?? ()
#16 0x0000000000aa4cad in __sanitizer::theDepot ()
#17 0x00007f53c18b8529 in ?? ()
#18 0x00007f53c4e866db in ?? ()
#19 0x00000feaf6a2e688 in ?? ()
#20 0x00007f53b51b3460 in ?? ()
#21 0x0000000000000000 in ?? ()

Thread 5 (LWP 3086):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x00007f53b71b8ca0 in ?? ()
#2  0x00000000000000aa in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x000061200000c728 in ?? ()
#5  0x00007f53b71b8c90 in ?? ()
#6  0x0000000000000154 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 4 (LWP 3085):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 3 (LWP 3084):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x0000000000000000 in ?? ()

Thread 2 (LWP 3083):
#0  0x00007f53c4e8cfb9 in ?? ()
#1  0x5f5347414c46000a in ?? ()
#2  0x0000000000000003 in ?? ()
#3  0x0000000000000081 in ?? ()
#4  0x0000612000012b38 in ?? ()
#5  0x00007f53b89bc450 in ?? ()
#6  0x0000000000000006 in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 1 (LWP 3082):
#0  0x00007f53c4e90d50 in ?? ()
#1  0x0000000000000000 in ?? ()
************************* END STACKS ***************************
I20260430 07:55:16.570231   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2820
I20260430 07:55:16.572376  2942 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:16.887586   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2820
W20260430 07:55:16.949855  3100 connection.cc:570] server connection from 127.0.105.1:35387 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20260430 07:55:16.950189  2968 connection.cc:570] server connection from 127.0.105.1:38369 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:16.950459   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2951
W20260430 07:55:16.950716  2747 connection.cc:570] server connection from 127.0.105.1:58277 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:16.952404  3073 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.205557   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2951
I20260430 07:55:17.262499   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3082
I20260430 07:55:17.264292  3204 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.537568   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3082
I20260430 07:55:17.597476   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 2730
I20260430 07:55:17.598734  2791 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:17.715852   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 2730
2026-04-30T07:55:17Z chronyd exiting
I20260430 07:55:17.755309   420 test_util.cc:182] -----------------------------------------------
I20260430 07:55:17.755452   420 test_util.cc:183] Had failures, leaving test files at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate.1777535629930685-420-0
[  FAILED  ] TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate (38341 ms)
[----------] 4 tests from TabletCopyITest (87737 ms total)

[----------] 1 test from FaultFlags/BadTabletCopyITest
[ RUN      ] FaultFlags/BadTabletCopyITest.TestBadCopy/1
2026-04-30T07:55:17Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2026-04-30T07:55:17Z Disabled control of system clock
I20260430 07:55:17.814373   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--webserver_interface=127.0.105.62
--webserver_port=0
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000 with env {}
W20260430 07:55:18.222046  3383 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:18.222405  3383 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:18.222472  3383 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:18.231815  3383 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:55:18.231948  3383 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:18.232002  3383 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:55:18.232046  3383 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:55:18.243646  3383 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:18.245710  3383 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:18.247696  3383 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:18.258946  3389 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:18.259001  3388 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:18.259801  3391 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:18.262367  3383 server_base.cc:1061] running on GCE node
I20260430 07:55:18.263582  3383 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:18.265419  3383 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:18.266700  3383 hybrid_clock.cc:648] HybridClock initialized: now 1777535718266648 us; error 43 us; skew 500 ppm
I20260430 07:55:18.267269  3383 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:18.270033  3383 webserver.cc:492] Webserver started at http://127.0.105.62:42371/ using document root <none> and password file <none>
I20260430 07:55:18.271138  3383 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:18.271242  3383 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:18.271684  3383 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:55:18.274690  3383 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/instance:
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.275668  3383 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal/instance:
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.282763  3383 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.008s	sys 0.000s
I20260430 07:55:18.287225  3397 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:18.289934  3383 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.006s	sys 0.000s
I20260430 07:55:18.290213  3383 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.290462  3383 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:18.307379  3383 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:18.308360  3383 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:18.308688  3383 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:18.332129  3383 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:33525
I20260430 07:55:18.332156  3448 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:33525 every 8 connection(s)
I20260430 07:55:18.333915  3383 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:55:18.336282   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3383
I20260430 07:55:18.336509   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal/instance
I20260430 07:55:18.340232  3449 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:18.351615  3449 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap starting.
I20260430 07:55:18.355726  3449 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:18.357060  3449 log.cc:826] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:18.361390  3449 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: No bootstrap required, opened a new log
I20260430 07:55:18.367221  3449 raft_consensus.cc:359] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:18.367663  3449 raft_consensus.cc:385] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:18.367838  3449 raft_consensus.cc:740] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Initialized, Role: FOLLOWER
I20260430 07:55:18.368485  3449 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:18.368676  3449 raft_consensus.cc:399] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:55:18.368867  3449 raft_consensus.cc:493] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:55:18.369025  3449 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:18.371340  3449 raft_consensus.cc:515] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:18.371953  3449 leader_election.cc:304] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3a75644a42d64405ae1dd61a955fbcc9; no voters: 
I20260430 07:55:18.372607  3449 leader_election.cc:290] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20260430 07:55:18.372891  3454 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:55:18.373636  3454 raft_consensus.cc:697] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 LEADER]: Becoming Leader. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Running, Role: LEADER
I20260430 07:55:18.374312  3454 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:18.375128  3449 sys_catalog.cc:565] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:55:18.377720  3456 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3a75644a42d64405ae1dd61a955fbcc9. Latest consensus state: current_term: 1 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:18.378041  3456 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:18.378549  3455 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:18.378846  3455 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:18.379395  3463 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:55:18.385885  3463 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:55:18.397143  3463 catalog_manager.cc:1357] Generated new cluster ID: c893b49e53f64173a896b4b2dba678c2
I20260430 07:55:18.397362  3463 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:55:18.418486  3463 catalog_manager.cc:1380] Generated new certificate authority record
I20260430 07:55:18.420061  3463 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:55:18.434306  3463 catalog_manager.cc:6044] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Generated new TSK 0
I20260430 07:55:18.435307  3463 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:55:18.444705   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:0
--local_ip_for_outbound_sockets=127.0.105.1
--webserver_interface=127.0.105.1
--webserver_port=0
--tserver_master_addrs=127.0.105.62:33525
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
W20260430 07:55:18.846544  3473 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:18.847054  3473 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:18.847213  3473 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:18.847332  3473 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:18.857393  3473 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:18.857656  3473 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:18.858108  3473 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:55:18.872134  3473 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=0
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:18.874365  3473 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:18.877266  3473 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:18.890882  3478 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:18.891013  3479 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:18.892675  3481 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:18.893613  3473 server_base.cc:1061] running on GCE node
I20260430 07:55:18.894912  3473 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:18.896478  3473 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:18.898183  3473 hybrid_clock.cc:648] HybridClock initialized: now 1777535718898060 us; error 95 us; skew 500 ppm
I20260430 07:55:18.898578  3473 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:18.901691  3473 webserver.cc:492] Webserver started at http://127.0.105.1:42573/ using document root <none> and password file <none>
I20260430 07:55:18.902510  3473 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:18.902616  3473 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:18.902941  3473 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:55:18.905995  3473 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/instance:
uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.906898  3473 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal/instance:
uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.913470  3473 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.003s	sys 0.004s
I20260430 07:55:18.917775  3487 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:18.919682  3473 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.001s
I20260430 07:55:18.919869  3473 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:18.920083  3473 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:18.943464  3473 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:18.944432  3473 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:18.944705  3473 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:18.945972  3473 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:18.947923  3473 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:55:18.948026  3473 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:18.948105  3473 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:55:18.948163  3473 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:18.988067  3473 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:36583
I20260430 07:55:18.988190  3599 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:36583 every 8 connection(s)
I20260430 07:55:18.989580  3473 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:55:18.990664   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3473
I20260430 07:55:18.990890   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal/instance
I20260430 07:55:18.994710   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:0
--local_ip_for_outbound_sockets=127.0.105.2
--webserver_interface=127.0.105.2
--webserver_port=0
--tserver_master_addrs=127.0.105.62:33525
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
I20260430 07:55:19.005682  3600 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:19.006181  3600 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:19.007369  3600 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:19.010495  3414 ts_manager.cc:194] Registered new tserver with Master: 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583)
I20260430 07:55:19.012234  3414 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:47853
W20260430 07:55:19.440662  3604 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:19.441020  3604 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:19.441187  3604 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:19.441339  3604 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:19.451223  3604 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:19.451423  3604 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:19.451663  3604 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:55:19.465338  3604 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=0
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:19.467063  3604 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:19.469132  3604 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:19.480705  3609 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:19.480921  3610 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:19.481231  3612 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:19.481884  3604 server_base.cc:1061] running on GCE node
I20260430 07:55:19.483311  3604 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:19.484579  3604 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:19.486011  3604 hybrid_clock.cc:648] HybridClock initialized: now 1777535719485900 us; error 52 us; skew 500 ppm
I20260430 07:55:19.486642  3604 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:19.490934  3604 webserver.cc:492] Webserver started at http://127.0.105.2:38509/ using document root <none> and password file <none>
I20260430 07:55:19.492555  3604 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:19.492722  3604 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:19.493171  3604 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:55:19.496528  3604 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/instance:
uuid: "9b6542a54f61418a894d790b5e1aa779"
format_stamp: "Formatted at 2026-04-30 07:55:19 on dist-test-slave-1g5s"
I20260430 07:55:19.497565  3604 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal/instance:
uuid: "9b6542a54f61418a894d790b5e1aa779"
format_stamp: "Formatted at 2026-04-30 07:55:19 on dist-test-slave-1g5s"
I20260430 07:55:19.504621  3604 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.000s
I20260430 07:55:19.509995  3618 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:19.512872  3604 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.001s	sys 0.003s
I20260430 07:55:19.513229  3604 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "9b6542a54f61418a894d790b5e1aa779"
format_stamp: "Formatted at 2026-04-30 07:55:19 on dist-test-slave-1g5s"
I20260430 07:55:19.513581  3604 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:19.530400  3604 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:19.531328  3604 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:19.531637  3604 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:19.532866  3604 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:19.535013  3604 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:55:19.535120  3604 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:19.535245  3604 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:55:19.535331  3604 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:19.581967  3604 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:36961
I20260430 07:55:19.582032  3730 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:36961 every 8 connection(s)
I20260430 07:55:19.583523  3604 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:55:19.588233   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3604
I20260430 07:55:19.588487   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal/instance
I20260430 07:55:19.593852   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.3:0
--local_ip_for_outbound_sockets=127.0.105.3
--webserver_interface=127.0.105.3
--webserver_port=0
--tserver_master_addrs=127.0.105.62:33525
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
I20260430 07:55:19.603780  3731 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:19.604554  3731 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:19.605954  3731 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:19.608302  3414 ts_manager.cc:194] Registered new tserver with Master: 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:19.609174  3414 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:44663
W20260430 07:55:20.000221  3735 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:20.000743  3735 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:20.000909  3735 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:20.001071  3735 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:20.010867  3735 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:20.011106  3735 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:20.011257  3735 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.3
I20260430 07:55:20.017417  3600 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:20.022862  3735 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.3:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.105.3
--webserver_port=0
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.3
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:20.024600  3735 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:20.026743  3735 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:20.038048  3740 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:20.038023  3743 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:20.038211  3741 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:20.039371  3735 server_base.cc:1061] running on GCE node
I20260430 07:55:20.040318  3735 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:20.041602  3735 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:20.043001  3735 hybrid_clock.cc:648] HybridClock initialized: now 1777535720042863 us; error 70 us; skew 500 ppm
I20260430 07:55:20.043429  3735 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:20.046586  3735 webserver.cc:492] Webserver started at http://127.0.105.3:34277/ using document root <none> and password file <none>
I20260430 07:55:20.048148  3735 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:20.048393  3735 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:20.049299  3735 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:55:20.052269  3735 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/instance:
uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.053253  3735 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal/instance:
uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.059837  3735 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.000s
I20260430 07:55:20.065110  3749 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:20.067085  3735 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.001s	sys 0.002s
I20260430 07:55:20.067282  3735 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.067576  3735 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:20.120414  3735 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:20.121770  3735 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:20.122098  3735 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:20.123257  3735 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:20.125140  3735 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:55:20.125334  3735 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.001s	sys 0.000s
I20260430 07:55:20.125581  3735 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:55:20.125695  3735 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:20.173156  3735 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.3:40671
I20260430 07:55:20.173257  3861 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.3:40671 every 8 connection(s)
I20260430 07:55:20.174762  3735 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
I20260430 07:55:20.177853   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3735
I20260430 07:55:20.178122   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal/instance
I20260430 07:55:20.181981   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.4:0
--local_ip_for_outbound_sockets=127.0.105.4
--webserver_interface=127.0.105.4
--webserver_port=0
--tserver_master_addrs=127.0.105.62:33525
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
I20260430 07:55:20.191890  3862 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:20.192327  3862 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:20.193508  3862 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:20.196059  3414 ts_manager.cc:194] Registered new tserver with Master: ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:20.197248  3414 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.3:33063
W20260430 07:55:20.596942  3866 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:20.597406  3866 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:20.597555  3866 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:20.597654  3866 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:20.607522  3866 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:20.607724  3866 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:20.607904  3866 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.4
I20260430 07:55:20.613440  3731 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:20.620693  3866 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.4:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--webserver_interface=127.0.105.4
--webserver_port=0
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.4
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:20.622480  3866 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:20.624370  3866 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:20.635182  3871 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:20.635126  3872 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:20.635777  3874 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:20.638227  3866 server_base.cc:1061] running on GCE node
I20260430 07:55:20.639102  3866 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:20.640158  3866 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:20.641409  3866 hybrid_clock.cc:648] HybridClock initialized: now 1777535720641359 us; error 37 us; skew 500 ppm
I20260430 07:55:20.641950  3866 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:20.644773  3866 webserver.cc:492] Webserver started at http://127.0.105.4:38779/ using document root <none> and password file <none>
I20260430 07:55:20.645777  3866 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:20.645895  3866 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:20.646235  3866 server_base.cc:909] This appears to be a new deployment of Kudu; creating new FS layout
I20260430 07:55:20.649109  3866 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/instance:
uuid: "2e401b3aecfd46378718b182a4bec89f"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.650009  3866 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal/instance:
uuid: "2e401b3aecfd46378718b182a4bec89f"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.657423  3866 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.003s	sys 0.004s
I20260430 07:55:20.662067  3880 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:20.664122  3866 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20260430 07:55:20.664371  3866 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
uuid: "2e401b3aecfd46378718b182a4bec89f"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:20.664618  3866 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:20.698627  3866 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:20.699591  3866 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:20.699949  3866 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:20.701267  3866 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:20.703387  3866 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:55:20.703537  3866 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:20.703696  3866 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:55:20.703785  3866 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:20.770241  3866 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.4:39005
I20260430 07:55:20.770280  3992 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.4:39005 every 8 connection(s)
I20260430 07:55:20.772511  3866 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
I20260430 07:55:20.773259   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 3866
I20260430 07:55:20.773476   420 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal/instance
I20260430 07:55:20.789507  3993 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:20.790030  3993 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:20.791147  3993 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:20.793093  3413 ts_manager.cc:194] Registered new tserver with Master: 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005)
I20260430 07:55:20.794148  3413 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.4:53863
I20260430 07:55:20.808960   420 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20260430 07:55:20.827728   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3866
I20260430 07:55:20.838985  3988 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:20.939869   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3866
I20260430 07:55:20.958999   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3383
I20260430 07:55:20.960577  3444 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:21.084115   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3383
I20260430 07:55:21.110461   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--webserver_interface=127.0.105.62
--webserver_port=42371
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000 with env {}
I20260430 07:55:21.201216  3862 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
W20260430 07:55:21.203112  3862 heartbeater.cc:646] Failed to heartbeat to 127.0.105.62:33525 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.105.62:33525: connect: Connection refused (error 111)
W20260430 07:55:21.517637  4006 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:21.518095  4006 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:21.518242  4006 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:21.528434  4006 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:55:21.528620  4006 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:21.528748  4006 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:55:21.528841  4006 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:55:21.539973  4006 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=42371
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:21.541903  4006 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:21.544013  4006 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:21.555033  4012 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:21.555213  4013 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:21.556103  4015 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:21.556423  4006 server_base.cc:1061] running on GCE node
I20260430 07:55:21.558178  4006 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:21.560271  4006 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:21.561872  4006 hybrid_clock.cc:648] HybridClock initialized: now 1777535721561799 us; error 91 us; skew 500 ppm
I20260430 07:55:21.562366  4006 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:21.565495  4006 webserver.cc:492] Webserver started at http://127.0.105.62:42371/ using document root <none> and password file <none>
I20260430 07:55:21.566370  4006 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:21.566499  4006 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:21.573165  4006 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.003s	sys 0.003s
I20260430 07:55:21.578058  4021 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:21.579892  4006 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.003s	sys 0.001s
I20260430 07:55:21.580152  4006 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:21.580957  4006 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:21.612116  4006 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:21.613279  4006 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:21.613648  4006 kserver.cc:163] Server-wide thread pool size limit: 3276
W20260430 07:55:21.618798  3731 heartbeater.cc:646] Failed to heartbeat to 127.0.105.62:33525 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.105.62:33525: connect: Connection refused (error 111)
I20260430 07:55:21.640983  4006 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:33525
I20260430 07:55:21.641026  4073 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:33525 every 8 connection(s)
I20260430 07:55:21.642895  4006 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:55:21.647042   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4006
I20260430 07:55:21.655428  4074 sys_catalog.cc:263] Verifying existing consensus state
I20260430 07:55:21.660677  4074 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap starting.
I20260430 07:55:21.689054  4074 log.cc:826] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:21.701015  4074 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=3 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:21.701735  4074 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap complete.
I20260430 07:55:21.708830  4074 raft_consensus.cc:359] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:21.709686  4074 raft_consensus.cc:740] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Initialized, Role: FOLLOWER
I20260430 07:55:21.710350  4074 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:21.710638  4074 raft_consensus.cc:399] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:55:21.710776  4074 raft_consensus.cc:493] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:55:21.711143  4074 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:21.716509  4074 raft_consensus.cc:515] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:21.717164  4074 leader_election.cc:304] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3a75644a42d64405ae1dd61a955fbcc9; no voters: 
I20260430 07:55:21.718081  4074 leader_election.cc:290] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 2 election: Requested vote from peers 
I20260430 07:55:21.718254  4079 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:55:21.719110  4079 raft_consensus.cc:697] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 LEADER]: Becoming Leader. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Running, Role: LEADER
I20260430 07:55:21.719714  4079 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:21.720322  4074 sys_catalog.cc:565] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:55:21.723174  4080 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:21.725525  4080 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:21.722940  4081 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3a75644a42d64405ae1dd61a955fbcc9. Latest consensus state: current_term: 2 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:21.726349  4081 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:21.729327  4088 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:55:21.735034  4088 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:55:21.736654  4088 catalog_manager.cc:1269] Loaded cluster ID: c893b49e53f64173a896b4b2dba678c2
I20260430 07:55:21.736825  4088 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:55:21.740216  4088 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:55:21.741611  4088 catalog_manager.cc:6055] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Loaded TSK: 0
I20260430 07:55:21.743048  4088 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:55:22.095125  4039 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" instance_seqno: 1777535718978733) as {username='slave'} at 127.0.105.1:46273; Asking this server to re-register.
I20260430 07:55:22.096125  3600 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:22.096432  3600 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:22.098021  4039 ts_manager.cc:194] Registered new tserver with Master: 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583)
I20260430 07:55:22.211818  3862 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:22.212070  3862 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:22.213343  4039 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" instance_seqno: 1777535720162926) as {username='slave'} at 127.0.105.3:34885; Asking this server to re-register.
I20260430 07:55:22.214133  3862 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:22.214370  3862 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:22.215322  4039 ts_manager.cc:194] Registered new tserver with Master: ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:22.626919  3731 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:22.628150  4039 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" instance_seqno: 1777535719571248) as {username='slave'} at 127.0.105.2:57467; Asking this server to re-register.
I20260430 07:55:22.629009  3731 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:22.629339  3731 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:22.630626  4039 ts_manager.cc:194] Registered new tserver with Master: 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:22.690084  4039 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:40130:
name: "table_a"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20260430 07:55:22.692530  4039 catalog_manager.cc:7033] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table table_a in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20260430 07:55:22.725699  3666 tablet_service.cc:1511] Processing CreateTablet for tablet 0d01e5768e6f435695871abd9deaee86 (DEFAULT_TABLE table=table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:22.727635  3666 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0d01e5768e6f435695871abd9deaee86. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:22.727522  3535 tablet_service.cc:1511] Processing CreateTablet for tablet 0d01e5768e6f435695871abd9deaee86 (DEFAULT_TABLE table=table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:22.729338  3535 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0d01e5768e6f435695871abd9deaee86. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:22.733861  3797 tablet_service.cc:1511] Processing CreateTablet for tablet 0d01e5768e6f435695871abd9deaee86 (DEFAULT_TABLE table=table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:22.735514  3797 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0d01e5768e6f435695871abd9deaee86. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:22.743266  4114 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap starting.
I20260430 07:55:22.744235  4115 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Bootstrap starting.
I20260430 07:55:22.747905  4116 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap starting.
I20260430 07:55:22.748538  4115 tablet_bootstrap.cc:654] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:22.748539  4114 tablet_bootstrap.cc:654] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:22.750571  4115 log.cc:826] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:22.750972  4114 log.cc:826] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:22.754082  4116 tablet_bootstrap.cc:654] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:22.755342  4115 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: No bootstrap required, opened a new log
I20260430 07:55:22.755638  4114 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: No bootstrap required, opened a new log
I20260430 07:55:22.755695  4115 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Time spent bootstrapping tablet: real 0.012s	user 0.004s	sys 0.004s
I20260430 07:55:22.756099  4114 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent bootstrapping tablet: real 0.013s	user 0.001s	sys 0.008s
I20260430 07:55:22.756374  4116 log.cc:826] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:22.760984  4116 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: No bootstrap required, opened a new log
I20260430 07:55:22.761852  4116 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent bootstrapping tablet: real 0.014s	user 0.009s	sys 0.003s
I20260430 07:55:22.763947  4115 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.764488  4115 raft_consensus.cc:385] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:22.764626  4115 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Initialized, Role: FOLLOWER
I20260430 07:55:22.766233  4115 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.766423  4114 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.766943  4114 raft_consensus.cc:385] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:22.767074  4114 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0696ac6914f940f2bcdc99c5d5c3d0e5, State: Initialized, Role: FOLLOWER
I20260430 07:55:22.767895  4114 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.769917  4114 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent starting tablet: real 0.014s	user 0.016s	sys 0.000s
W20260430 07:55:22.771605  3601 tablet.cc:2404] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:55:22.771533  4116 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.772011  4116 raft_consensus.cc:385] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:22.772120  4116 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea6bd19fddfe4b988f4682a0bdec2adc, State: Initialized, Role: FOLLOWER
I20260430 07:55:22.772920  4116 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.776046  4115 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Time spent starting tablet: real 0.020s	user 0.014s	sys 0.002s
W20260430 07:55:22.776255  3732 tablet.cc:2404] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:55:22.776922  4116 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent starting tablet: real 0.015s	user 0.009s	sys 0.004s
W20260430 07:55:22.781729  3863 tablet.cc:2404] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:55:22.927402  4121 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:22.927774  4121 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.929932  4121 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:22.938202  3686 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779" is_pre_election: true
I20260430 07:55:22.938787  3686 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0696ac6914f940f2bcdc99c5d5c3d0e5 in term 0.
I20260430 07:55:22.939574  3490 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0696ac6914f940f2bcdc99c5d5c3d0e5, 9b6542a54f61418a894d790b5e1aa779; no voters: 
I20260430 07:55:22.939436  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:22.939986  4121 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20260430 07:55:22.940042  3817 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0696ac6914f940f2bcdc99c5d5c3d0e5 in term 0.
I20260430 07:55:22.940110  4121 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:55:22.940246  4121 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:22.943192  4121 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.943943  4121 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 1 election: Requested vote from peers 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:22.944607  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:22.944756  3686 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779"
I20260430 07:55:22.944882  3817 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:22.944998  3686 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:22.947743  3817 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0696ac6914f940f2bcdc99c5d5c3d0e5 in term 1.
I20260430 07:55:22.947836  3686 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0696ac6914f940f2bcdc99c5d5c3d0e5 in term 1.
I20260430 07:55:22.948356  3490 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0696ac6914f940f2bcdc99c5d5c3d0e5, 9b6542a54f61418a894d790b5e1aa779; no voters: 
I20260430 07:55:22.948886  4121 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:55:22.949571  4121 raft_consensus.cc:697] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 LEADER]: Becoming Leader. State: Replica: 0696ac6914f940f2bcdc99c5d5c3d0e5, State: Running, Role: LEADER
I20260430 07:55:22.950377  4121 consensus_queue.cc:237] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:22.957147  4037 catalog_manager.cc:5671] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 reported cstate change: term changed from 0 to 1, leader changed from <none> to 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1). New cstate: current_term: 1 leader_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:23.015424   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.4:39005
--local_ip_for_outbound_sockets=127.0.105.4
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=38779
--webserver_interface=127.0.105.4
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
I20260430 07:55:23.362866  4121 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:55:23.378401  4121 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20260430 07:55:23.441452  4127 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:23.441844  4127 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:23.441982  4127 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:23.442091  4127 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:23.451884  4127 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:23.452135  4127 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:23.452292  4127 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.4
I20260430 07:55:23.464607  4127 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.4:39005
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--webserver_interface=127.0.105.4
--webserver_port=38779
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.4
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:23.466668  4127 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:23.468726  4127 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:23.481068  4139 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:23.482087  4142 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:23.482213  4140 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:23.483860  4127 server_base.cc:1061] running on GCE node
I20260430 07:55:23.484592  4127 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:23.485810  4127 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:23.487627  4127 hybrid_clock.cc:648] HybridClock initialized: now 1777535723487487 us; error 105 us; skew 500 ppm
I20260430 07:55:23.488191  4127 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:23.491313  4127 webserver.cc:492] Webserver started at http://127.0.105.4:38779/ using document root <none> and password file <none>
I20260430 07:55:23.492288  4127 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:23.492489  4127 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:23.500168  4127 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.005s	sys 0.000s
I20260430 07:55:23.504256  4148 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:23.506269  4127 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.000s	sys 0.002s
I20260430 07:55:23.506513  4127 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
uuid: "2e401b3aecfd46378718b182a4bec89f"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:23.507280  4127 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:23.522809  4127 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:23.523784  4127 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:23.524188  4127 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:23.525285  4127 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:23.527235  4127 ts_tablet_manager.cc:585] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20260430 07:55:23.527330  4127 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:23.527428  4127 ts_tablet_manager.cc:616] Registered 0 tablets
I20260430 07:55:23.527490  4127 ts_tablet_manager.cc:595] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:23.574085  4127 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.4:39005
I20260430 07:55:23.574121  4260 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.4:39005 every 8 connection(s)
I20260430 07:55:23.576884  4127 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
I20260430 07:55:23.584255   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4127
I20260430 07:55:23.608201  4261 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:23.609541  4261 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:23.611065  4261 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:23.613794  4037 ts_manager.cc:194] Registered new tserver with Master: 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005)
I20260430 07:55:23.615196  4037 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.4:54387
I20260430 07:55:23.668170   420 tablet_copy-itest.cc:1499] Blocks diff: 0
I20260430 07:55:23.743971  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:23.746866  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:23.750309  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:23.845248  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.080s	user 0.029s	sys 0.026s Metrics: {"bytes_written":5984,"cfile_init":1,"compiler_manager_pool.queue_time_us":12271,"dirs.queue_time_us":6910,"dirs.run_cpu_time_us":388,"dirs.run_wall_time_us":5191,"drs_written":1,"lbm_read_time_us":136,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1211,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":50,"thread_start_us":19376,"threads_started":2}
I20260430 07:55:23.850734  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:23.944531  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.092s	user 0.038s	sys 0.003s Metrics: {"bytes_written":13804,"cfile_init":1,"dirs.queue_time_us":57,"dirs.run_cpu_time_us":218,"dirs.run_wall_time_us":1793,"drs_written":1,"lbm_read_time_us":92,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1267,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":350}
I20260430 07:55:23.946506  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 3594 bytes on disk
I20260430 07:55:23.951049  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.001s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":179,"lbm_reads_lt_1ms":8}
I20260430 07:55:23.952557  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014116
I20260430 07:55:23.975726  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.228s	user 0.041s	sys 0.008s Metrics: {"bytes_written":5984,"cfile_init":1,"compiler_manager_pool.queue_time_us":20416,"dirs.queue_time_us":6440,"dirs.run_cpu_time_us":336,"dirs.run_wall_time_us":6963,"drs_written":1,"lbm_read_time_us":126,"lbm_reads_lt_1ms":4,"lbm_write_time_us":944,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":50,"spinlock_wait_cycles":306560,"thread_start_us":34660,"threads_started":2}
I20260430 07:55:23.977813  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 450 bytes on disk
I20260430 07:55:23.978868  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":115,"lbm_reads_lt_1ms":4}
I20260430 07:55:23.979816  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:24.073928  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.121s	user 0.042s	sys 0.004s Metrics: {"bytes_written":15083,"cfile_cache_hit":10,"cfile_cache_hit_bytes":7516,"cfile_cache_miss":26,"cfile_cache_miss_bytes":21218,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":7791,"dirs.run_cpu_time_us":1320,"dirs.run_wall_time_us":7052,"drs_written":1,"lbm_read_time_us":6208,"lbm_reads_1-10_ms":2,"lbm_reads_lt_1ms":44,"lbm_write_time_us":897,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":8692,"rows_written":400,"thread_start_us":19176,"threads_started":2}
I20260430 07:55:24.075222  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 3562 bytes on disk
I20260430 07:55:24.076073  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":102,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.076804  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:24.096840  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.352s	user 0.043s	sys 0.000s Metrics: {"bytes_written":5984,"cfile_init":1,"compiler_manager_pool.queue_time_us":4558,"dirs.queue_time_us":20654,"dirs.run_cpu_time_us":243,"dirs.run_wall_time_us":2418,"drs_written":1,"lbm_read_time_us":637,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1122,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":50,"spinlock_wait_cycles":257280,"thread_start_us":34086,"threads_started":2}
I20260430 07:55:24.099128  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 450 bytes on disk
I20260430 07:55:24.129966  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":120,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.134398  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:24.151237  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.171s	user 0.061s	sys 0.003s Metrics: {"bytes_written":25425,"cfile_init":1,"dirs.queue_time_us":54,"dirs.run_cpu_time_us":227,"dirs.run_wall_time_us":2388,"drs_written":1,"lbm_read_time_us":95,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1259,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":800}
I20260430 07:55:24.153061  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 6957 bytes on disk
I20260430 07:55:24.153978  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":125,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.155669  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.009992
I20260430 07:55:24.270784  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.194s	user 0.043s	sys 0.008s Metrics: {"bytes_written":20277,"cfile_init":1,"dirs.queue_time_us":52,"dirs.run_cpu_time_us":167,"dirs.run_wall_time_us":1431,"drs_written":1,"lbm_read_time_us":77,"lbm_reads_lt_1ms":4,"lbm_write_time_us":989,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":600}
I20260430 07:55:24.273571  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 5277 bytes on disk
I20260430 07:55:24.274273  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":102,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.275221  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.011680
I20260430 07:55:24.432539  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.157s	user 0.065s	sys 0.015s Metrics: {"bytes_written":30650,"cfile_cache_hit":10,"cfile_cache_hit_bytes":18462,"cfile_cache_miss":26,"cfile_cache_miss_bytes":51892,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":10854,"dirs.run_cpu_time_us":892,"dirs.run_wall_time_us":7651,"drs_written":1,"lbm_read_time_us":1029,"lbm_reads_lt_1ms":46,"lbm_write_time_us":21062,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":22,"mutex_wait_us":10105,"num_input_rowsets":2,"peak_mem_usage":20346,"rows_written":1000,"thread_start_us":24503,"threads_started":1}
I20260430 07:55:24.433624  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 8675 bytes on disk
I20260430 07:55:24.465659  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":99,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.466585  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:24.513458  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.357s	user 0.061s	sys 0.004s Metrics: {"bytes_written":26747,"cfile_cache_hit":10,"cfile_cache_hit_bytes":15726,"cfile_cache_miss":26,"cfile_cache_miss_bytes":43876,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":23446,"dirs.run_cpu_time_us":1161,"dirs.run_wall_time_us":7987,"drs_written":1,"lbm_read_time_us":1123,"lbm_reads_lt_1ms":46,"lbm_write_time_us":1064,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":16752,"rows_written":850,"thread_start_us":39281,"threads_started":2}
I20260430 07:55:24.514842  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 7404 bytes on disk
I20260430 07:55:24.515539  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":80,"lbm_reads_lt_1ms":4}
I20260430 07:55:24.516501  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:24.624598  4261 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:24.709117   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:24.806020  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.671s	user 0.085s	sys 0.008s Metrics: {"bytes_written":33168,"cfile_init":1,"dirs.queue_time_us":56129,"dirs.run_cpu_time_us":212,"dirs.run_wall_time_us":7965,"drs_written":1,"lbm_read_time_us":117,"lbm_reads_lt_1ms":4,"lbm_write_time_us":7900,"lbm_writes_1-10_ms":1,"lbm_writes_lt_1ms":22,"peak_mem_usage":0,"rows_written":1100,"thread_start_us":56314,"threads_started":1}
I20260430 07:55:24.808215  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.009893
I20260430 07:55:24.807930  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.340s	user 0.074s	sys 0.015s Metrics: {"bytes_written":34498,"cfile_init":1,"dirs.queue_time_us":9156,"dirs.run_cpu_time_us":218,"dirs.run_wall_time_us":2237,"drs_written":1,"lbm_read_time_us":87,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1546,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1150}
I20260430 07:55:24.814189  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.013008
I20260430 07:55:24.885004  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.364s	user 0.089s	sys 0.020s Metrics: {"bytes_written":40918,"cfile_init":1,"dirs.queue_time_us":157,"dirs.run_cpu_time_us":208,"dirs.run_wall_time_us":2320,"drs_written":1,"lbm_read_time_us":82,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1271,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1400}
I20260430 07:55:24.886245  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.012692
I20260430 07:55:25.039551  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.147s	user 0.100s	sys 0.004s Metrics: {"bytes_written":66843,"cfile_cache_hit":10,"cfile_cache_hit_bytes":41277,"cfile_cache_miss":26,"cfile_cache_miss_bytes":115019,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":1999,"dirs.run_cpu_time_us":1012,"dirs.run_wall_time_us":9181,"drs_written":1,"lbm_read_time_us":1100,"lbm_reads_lt_1ms":50,"lbm_write_time_us":1279,"lbm_writes_lt_1ms":24,"mutex_wait_us":4,"num_input_rowsets":2,"peak_mem_usage":36511,"rows_written":2250,"thread_start_us":23938,"threads_started":1}
I20260430 07:55:25.041078  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 19462 bytes on disk
I20260430 07:55:25.041776  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":107,"lbm_reads_lt_1ms":4}
I20260430 07:55:25.043186  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:25.063673  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.255s	user 0.066s	sys 0.000s Metrics: {"bytes_written":34539,"cfile_cache_hit":10,"cfile_cache_hit_bytes":21201,"cfile_cache_miss":26,"cfile_cache_miss_bytes":59013,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":2806,"dirs.run_cpu_time_us":197,"dirs.run_wall_time_us":1715,"drs_written":1,"lbm_read_time_us":1428,"lbm_reads_lt_1ms":50,"lbm_write_time_us":1006,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":21027,"rows_written":1150}
I20260430 07:55:25.090794  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 9957 bytes on disk
I20260430 07:55:25.092408  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":113,"lbm_reads_lt_1ms":4}
I20260430 07:55:25.123426  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:25.134037  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.317s	user 0.099s	sys 0.003s Metrics: {"bytes_written":64383,"cfile_cache_hit":10,"cfile_cache_hit_bytes":39452,"cfile_cache_miss":26,"cfile_cache_miss_bytes":110080,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":68,"dirs.run_cpu_time_us":341,"dirs.run_wall_time_us":1985,"drs_written":1,"lbm_read_time_us":1166,"lbm_reads_lt_1ms":50,"lbm_write_time_us":1094,"lbm_writes_lt_1ms":24,"num_input_rowsets":2,"peak_mem_usage":35086,"rows_written":2150}
I20260430 07:55:25.136668  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:25.224401  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.181s	user 0.084s	sys 0.019s Metrics: {"bytes_written":39557,"cfile_init":1,"dirs.queue_time_us":115,"dirs.run_cpu_time_us":344,"dirs.run_wall_time_us":2027,"drs_written":1,"lbm_read_time_us":85,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1017,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1350}
I20260430 07:55:25.226621  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 11773 bytes on disk
I20260430 07:55:25.231081  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":124,"lbm_reads_lt_1ms":4}
I20260430 07:55:25.234336  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014463
I20260430 07:55:25.640969  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.406s	user 0.163s	sys 0.018s Metrics: {"bytes_written":105242,"cfile_cache_hit":10,"cfile_cache_hit_bytes":65918,"cfile_cache_miss":26,"cfile_cache_miss_bytes":183068,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":6340,"dirs.run_cpu_time_us":1288,"dirs.run_wall_time_us":10914,"drs_written":1,"lbm_read_time_us":1965,"lbm_reads_lt_1ms":46,"lbm_write_time_us":2031,"lbm_writes_lt_1ms":26,"mutex_wait_us":1,"num_input_rowsets":2,"peak_mem_usage":54101,"rows_written":3600,"thread_start_us":39905,"threads_started":2}
I20260430 07:55:25.646489  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 32237 bytes on disk
I20260430 07:55:25.650028  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":118,"lbm_reads_lt_1ms":4}
I20260430 07:55:25.652045  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:25.724195   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:25.827576  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.690s	user 0.093s	sys 0.043s Metrics: {"bytes_written":44674,"cfile_init":1,"dirs.queue_time_us":4970,"dirs.run_cpu_time_us":242,"dirs.run_wall_time_us":2867,"drs_written":1,"lbm_read_time_us":89,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1050,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1550}
I20260430 07:55:25.838951  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 32084 bytes on disk
I20260430 07:55:25.839988  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":260,"lbm_reads_lt_1ms":8}
I20260430 07:55:25.841012  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014496
I20260430 07:55:25.869887  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.217s	user 0.100s	sys 0.040s Metrics: {"bytes_written":35784,"cfile_init":1,"compiler_manager_pool.queue_time_us":9343,"compiler_manager_pool.run_cpu_time_us":10,"compiler_manager_pool.run_wall_time_us":10,"dirs.queue_time_us":1963,"dirs.run_cpu_time_us":653,"dirs.run_wall_time_us":2910,"drs_written":1,"lbm_read_time_us":91,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1245,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1200}
I20260430 07:55:25.871414  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014351
I20260430 07:55:26.135860  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 1.011s	user 0.171s	sys 0.038s Metrics: {"bytes_written":74640,"cfile_init":1,"dirs.queue_time_us":13585,"dirs.run_cpu_time_us":284,"dirs.run_wall_time_us":4186,"drs_written":1,"lbm_read_time_us":75,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1214,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":2550,"thread_start_us":14640,"threads_started":1}
I20260430 07:55:26.137737  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.012423
I20260430 07:55:26.172683  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.301s	user 0.188s	sys 0.014s Metrics: {"bytes_written":137069,"cfile_cache_hit":10,"cfile_cache_hit_bytes":87835,"cfile_cache_miss":28,"cfile_cache_miss_bytes":244975,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":73,"dirs.run_cpu_time_us":261,"dirs.run_wall_time_us":1649,"drs_written":1,"lbm_read_time_us":1465,"lbm_reads_lt_1ms":52,"lbm_write_time_us":1573,"lbm_writes_lt_1ms":27,"num_input_rowsets":2,"peak_mem_usage":75801,"rows_written":4800}
I20260430 07:55:26.188314  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:26.257742  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.416s	user 0.159s	sys 0.021s Metrics: {"bytes_written":108145,"cfile_cache_hit":10,"cfile_cache_hit_bytes":67743,"cfile_cache_miss":26,"cfile_cache_miss_bytes":188255,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":20407,"dirs.run_cpu_time_us":1620,"dirs.run_wall_time_us":11080,"drs_written":1,"lbm_read_time_us":1418,"lbm_reads_lt_1ms":46,"lbm_write_time_us":1340,"lbm_writes_lt_1ms":26,"mutex_wait_us":6071,"num_input_rowsets":2,"peak_mem_usage":55526,"rows_written":3700,"spinlock_wait_cycles":267648,"thread_start_us":104991,"threads_started":3}
I20260430 07:55:26.259011  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 33256 bytes on disk
I20260430 07:55:26.260898  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":95,"lbm_reads_lt_1ms":4}
I20260430 07:55:26.267053  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:26.382942  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.190s	user 0.069s	sys 0.012s Metrics: {"bytes_written":31868,"cfile_init":1,"dirs.queue_time_us":52,"dirs.run_cpu_time_us":186,"dirs.run_wall_time_us":1385,"drs_written":1,"lbm_read_time_us":107,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1295,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1050}
I20260430 07:55:26.383952  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.012615
I20260430 07:55:26.658190  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.390s	user 0.128s	sys 0.039s Metrics: {"bytes_written":66748,"cfile_init":1,"compiler_manager_pool.queue_time_us":420338,"compiler_manager_pool.run_cpu_time_us":16,"compiler_manager_pool.run_wall_time_us":16,"dirs.queue_time_us":60,"dirs.run_cpu_time_us":241,"dirs.run_wall_time_us":1700,"drs_written":1,"lbm_read_time_us":167,"lbm_reads_lt_1ms":4,"lbm_write_time_us":34520,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":2250}
I20260430 07:55:26.659523  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014586
I20260430 07:55:26.662834  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.525s	user 0.155s	sys 0.004s Metrics: {"bytes_written":108145,"cfile_cache_hit":10,"cfile_cache_hit_bytes":67765,"cfile_cache_miss":28,"cfile_cache_miss_bytes":189867,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":13048,"dirs.run_cpu_time_us":278,"dirs.run_wall_time_us":1747,"drs_written":1,"lbm_read_time_us":3450,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":51,"lbm_write_time_us":1528,"lbm_writes_lt_1ms":26,"num_input_rowsets":2,"peak_mem_usage":55781,"rows_written":3700,"thread_start_us":20387,"threads_started":1}
I20260430 07:55:26.663828  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:26.730003   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:26.947104  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.558s	user 0.218s	sys 0.035s Metrics: {"bytes_written":168894,"cfile_cache_hit":10,"cfile_cache_hit_bytes":107728,"cfile_cache_miss":30,"cfile_cache_miss_bytes":301508,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":52029,"dirs.run_cpu_time_us":1071,"dirs.run_wall_time_us":19931,"drs_written":1,"lbm_read_time_us":1719,"lbm_reads_lt_1ms":58,"lbm_write_time_us":1610,"lbm_writes_lt_1ms":28,"num_input_rowsets":2,"peak_mem_usage":90314,"rows_written":5850,"spinlock_wait_cycles":5376,"thread_start_us":49223,"threads_started":3}
I20260430 07:55:26.948180  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 53684 bytes on disk
I20260430 07:55:26.948974  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":97,"lbm_reads_lt_1ms":4}
I20260430 07:55:26.949626  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:27.013255  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.349s	user 0.219s	sys 0.035s Metrics: {"bytes_written":171573,"cfile_cache_hit":10,"cfile_cache_hit_bytes":108826,"cfile_cache_miss":28,"cfile_cache_miss_bytes":304272,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":5442,"dirs.run_cpu_time_us":2043,"dirs.run_wall_time_us":16073,"drs_written":1,"lbm_read_time_us":1739,"lbm_reads_lt_1ms":52,"lbm_write_time_us":1865,"lbm_writes_lt_1ms":28,"mutex_wait_us":1778,"num_input_rowsets":2,"peak_mem_usage":90641,"rows_written":5950,"thread_start_us":34665,"threads_started":2}
I20260430 07:55:27.014621  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 54582 bytes on disk
I20260430 07:55:27.016949  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":119,"lbm_reads_lt_1ms":4}
I20260430 07:55:27.017769  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:27.148279  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.479s	user 0.186s	sys 0.044s Metrics: {"bytes_written":93835,"cfile_init":1,"compiler_manager_pool.queue_time_us":223428,"compiler_manager_pool.run_cpu_time_us":12,"compiler_manager_pool.run_wall_time_us":12,"dirs.queue_time_us":74,"dirs.run_cpu_time_us":212,"dirs.run_wall_time_us":2015,"drs_written":1,"lbm_read_time_us":147,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1301,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":3300}
I20260430 07:55:27.149534  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014591
I20260430 07:55:27.154830  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.136s	user 0.106s	sys 0.020s Metrics: {"bytes_written":56563,"cfile_init":1,"dirs.queue_time_us":48,"dirs.run_cpu_time_us":197,"dirs.run_wall_time_us":1830,"drs_written":1,"lbm_read_time_us":90,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1757,"lbm_writes_lt_1ms":24,"peak_mem_usage":0,"rows_written":1850}
I20260430 07:55:27.156350  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014563
I20260430 07:55:27.159449  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.210s	user 0.102s	sys 0.012s Metrics: {"bytes_written":55157,"cfile_init":1,"dirs.queue_time_us":55,"dirs.run_cpu_time_us":167,"dirs.run_wall_time_us":1312,"drs_written":1,"lbm_read_time_us":81,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1121,"lbm_writes_lt_1ms":24,"peak_mem_usage":0,"rows_written":1800}
I20260430 07:55:27.160897  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 15783 bytes on disk
I20260430 07:55:27.161896  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":120,"lbm_reads_lt_1ms":4}
I20260430 07:55:27.163235  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014563
I20260430 07:55:27.743199  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.593s	user 0.282s	sys 0.024s Metrics: {"bytes_written":203369,"cfile_cache_hit":10,"cfile_cache_hit_bytes":128008,"cfile_cache_miss":30,"cfile_cache_miss_bytes":358804,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":1985,"dirs.run_cpu_time_us":212,"dirs.run_wall_time_us":2585,"drs_written":1,"lbm_read_time_us":4533,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":57,"lbm_write_time_us":9852,"lbm_writes_1-10_ms":1,"lbm_writes_lt_1ms":29,"num_input_rowsets":2,"peak_mem_usage":108256,"rows_written":7000,"thread_start_us":5083,"threads_started":1}
I20260430 07:55:27.746424  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:27.753319   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:27.888787  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.732s	user 0.309s	sys 0.079s Metrics: {"bytes_written":224612,"cfile_cache_hit":10,"cfile_cache_hit_bytes":144469,"cfile_cache_miss":30,"cfile_cache_miss_bytes":402435,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":97917,"dirs.run_cpu_time_us":1612,"dirs.run_wall_time_us":14617,"drs_written":1,"lbm_read_time_us":3481,"lbm_reads_lt_1ms":54,"lbm_write_time_us":2196,"lbm_writes_lt_1ms":30,"mutex_wait_us":44,"num_input_rowsets":2,"peak_mem_usage":117599,"rows_written":7800,"thread_start_us":105963,"threads_started":3}
I20260430 07:55:27.890368  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 72440 bytes on disk
I20260430 07:55:27.892329  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":391,"lbm_reads_lt_1ms":4}
I20260430 07:55:27.893249  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:27.928689  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.765s	user 0.348s	sys 0.064s Metrics: {"bytes_written":220739,"cfile_cache_hit":10,"cfile_cache_hit_bytes":141629,"cfile_cache_miss":30,"cfile_cache_miss_bytes":394411,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":60937,"dirs.run_cpu_time_us":1311,"dirs.run_wall_time_us":14516,"drs_written":1,"lbm_read_time_us":2627,"lbm_reads_lt_1ms":50,"lbm_write_time_us":2809,"lbm_writes_lt_1ms":30,"mutex_wait_us":37,"num_input_rowsets":2,"peak_mem_usage":117009,"rows_written":7650,"spinlock_wait_cycles":10496,"thread_start_us":79617,"threads_started":3}
I20260430 07:55:27.930131  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 71075 bytes on disk
I20260430 07:55:27.941491  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":91,"lbm_reads_lt_1ms":4}
I20260430 07:55:27.951552  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:28.146045  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.399s	user 0.163s	sys 0.017s Metrics: {"bytes_written":75844,"cfile_init":1,"dirs.queue_time_us":13150,"dirs.run_cpu_time_us":277,"dirs.run_wall_time_us":2361,"drs_written":1,"lbm_read_time_us":98,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1203,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":2600,"spinlock_wait_cycles":255872}
I20260430 07:55:28.147265  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 87887 bytes on disk
I20260430 07:55:28.160804  3492 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":200,"lbm_reads_lt_1ms":8}
I20260430 07:55:28.162024  3601 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.013938
I20260430 07:55:28.172900   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3473
I20260430 07:55:28.175315  3595 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:28.178484  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.285s	user 0.118s	sys 0.023s Metrics: {"bytes_written":62972,"cfile_init":1,"dirs.queue_time_us":14937,"dirs.run_cpu_time_us":168,"dirs.run_wall_time_us":1744,"drs_written":1,"lbm_read_time_us":81,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1134,"lbm_writes_lt_1ms":24,"peak_mem_usage":0,"rows_written":2100}
I20260430 07:55:28.179992  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014275
I20260430 07:55:28.186082  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.234s	user 0.143s	sys 0.019s Metrics: {"bytes_written":71869,"cfile_init":1,"dirs.queue_time_us":86,"dirs.run_cpu_time_us":152,"dirs.run_wall_time_us":2432,"drs_written":1,"lbm_read_time_us":84,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1466,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":2450}
I20260430 07:55:28.187697  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 21767 bytes on disk
I20260430 07:55:28.189690  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.001s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":135,"lbm_reads_lt_1ms":4}
I20260430 07:55:28.190960  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014297
I20260430 07:55:28.411998  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.232s	user 0.219s	sys 0.009s Metrics: {"bytes_written":284212,"cfile_cache_hit":10,"cfile_cache_hit_bytes":184663,"cfile_cache_miss":32,"cfile_cache_miss_bytes":512883,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":1624,"dirs.run_cpu_time_us":546,"dirs.run_wall_time_us":3391,"drs_written":1,"lbm_read_time_us":1995,"lbm_reads_lt_1ms":56,"lbm_write_time_us":1953,"lbm_writes_lt_1ms":32,"mutex_wait_us":23,"num_input_rowsets":2,"peak_mem_usage":152667,"rows_written":9900,"spinlock_wait_cycles":1664,"thread_start_us":1331,"threads_started":3}
I20260430 07:55:28.413089  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:28.426440   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3473
I20260430 07:55:28.464085   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4006
I20260430 07:55:28.465530  4069 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:28.472301  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.059s	user 0.049s	sys 0.008s Metrics: {"bytes_written":29310,"cfile_init":1,"dirs.queue_time_us":135,"dirs.run_cpu_time_us":234,"dirs.run_wall_time_us":937,"drs_written":1,"lbm_read_time_us":87,"lbm_reads_lt_1ms":4,"lbm_write_time_us":764,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":950}
I20260430 07:55:28.473453  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.009787
I20260430 07:55:28.557551  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.366s	user 0.335s	sys 0.023s Metrics: {"bytes_written":289688,"cfile_cache_hit":10,"cfile_cache_hit_bytes":188190,"cfile_cache_miss":34,"cfile_cache_miss_bytes":523476,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":2102,"dirs.run_cpu_time_us":409,"dirs.run_wall_time_us":2773,"drs_written":1,"lbm_read_time_us":2246,"lbm_reads_lt_1ms":54,"lbm_write_time_us":3100,"lbm_writes_lt_1ms":32,"num_input_rowsets":2,"peak_mem_usage":152227,"rows_written":10100,"thread_start_us":1409,"threads_started":4}
I20260430 07:55:28.558749  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 94752 bytes on disk
I20260430 07:55:28.559386  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":75,"lbm_reads_lt_1ms":4}
I20260430 07:55:28.559949  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:28.601816   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4006
I20260430 07:55:28.609309  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.049s	user 0.028s	sys 0.018s Metrics: {"bytes_written":24186,"cfile_init":1,"dirs.queue_time_us":103,"dirs.run_cpu_time_us":211,"dirs.run_wall_time_us":1099,"drs_written":1,"lbm_read_time_us":124,"lbm_reads_lt_1ms":4,"lbm_write_time_us":837,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":750}
I20260430 07:55:28.610563  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 6520 bytes on disk
I20260430 07:55:28.611357  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":111,"lbm_reads_lt_1ms":4}
I20260430 07:55:28.612258  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.007690
I20260430 07:55:28.636443   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--webserver_interface=127.0.105.62
--webserver_port=42371
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000 with env {}
W20260430 07:55:28.662762  4261 heartbeater.cc:646] Failed to heartbeat to 127.0.105.62:33525 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.105.62:33525: connect: Connection refused (error 111)
I20260430 07:55:28.845930  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.372s	user 0.324s	sys 0.045s Metrics: {"bytes_written":313544,"cfile_cache_hit":10,"cfile_cache_hit_bytes":204125,"cfile_cache_miss":34,"cfile_cache_miss_bytes":565885,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":977,"dirs.run_cpu_time_us":296,"dirs.run_wall_time_us":1955,"drs_written":1,"lbm_read_time_us":2108,"lbm_reads_lt_1ms":62,"lbm_write_time_us":2239,"lbm_writes_lt_1ms":33,"num_input_rowsets":2,"peak_mem_usage":166655,"rows_written":10850,"spinlock_wait_cycles":2304,"thread_start_us":706,"threads_started":2}
I20260430 07:55:28.847127  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:28.848104  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":89,"lbm_reads_lt_1ms":4}
I20260430 07:55:28.940815  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.328s	user 0.303s	sys 0.024s Metrics: {"bytes_written":313544,"cfile_cache_hit":10,"cfile_cache_hit_bytes":204325,"cfile_cache_miss":34,"cfile_cache_miss_bytes":566735,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":1270,"dirs.run_cpu_time_us":229,"dirs.run_wall_time_us":2225,"drs_written":1,"lbm_read_time_us":1903,"lbm_reads_lt_1ms":54,"lbm_write_time_us":2262,"lbm_writes_lt_1ms":33,"mutex_wait_us":87,"num_input_rowsets":2,"peak_mem_usage":166855,"rows_written":10850,"thread_start_us":523,"threads_started":1}
I20260430 07:55:28.941998  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:28.942745  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":86,"lbm_reads_lt_1ms":4}
W20260430 07:55:29.059507  4357 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:29.059890  4357 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:29.059968  4357 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:29.069751  4357 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:55:29.069916  4357 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:29.069980  4357 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:55:29.070026  4357 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:55:29.082336  4357 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=42371
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:29.083986  4357 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:29.085976  4357 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:29.097435  4365 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:29.097420  4366 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:29.097646  4368 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:29.100108  4357 server_base.cc:1061] running on GCE node
I20260430 07:55:29.101372  4357 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:29.103250  4357 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:29.104584  4357 hybrid_clock.cc:648] HybridClock initialized: now 1777535729104523 us; error 46 us; skew 500 ppm
I20260430 07:55:29.105104  4357 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:29.107767  4357 webserver.cc:492] Webserver started at http://127.0.105.62:42371/ using document root <none> and password file <none>
I20260430 07:55:29.108656  4357 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:29.108762  4357 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:29.115391  4357 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.004s	sys 0.000s
I20260430 07:55:29.119490  4374 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:29.121090  4357 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.000s
I20260430 07:55:29.121358  4357 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:29.122313  4357 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:29.141574  4357 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:29.142561  4357 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:29.142985  4357 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:29.167618  4357 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:33525
I20260430 07:55:29.167717  4425 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:33525 every 8 connection(s)
I20260430 07:55:29.169486  4357 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:55:29.175576   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4357
I20260430 07:55:29.178658  4426 sys_catalog.cc:263] Verifying existing consensus state
I20260430 07:55:29.185786  4426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap starting.
I20260430 07:55:29.209601  4426 log.cc:826] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:29.222159  4426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:29.222762  4426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap complete.
I20260430 07:55:29.229753  4426 raft_consensus.cc:359] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:29.230510  4426 raft_consensus.cc:740] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Initialized, Role: FOLLOWER
I20260430 07:55:29.231141  4426 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:29.231345  4426 raft_consensus.cc:399] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:55:29.231436  4426 raft_consensus.cc:493] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:55:29.231577  4426 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:55:29.235277  4426 raft_consensus.cc:515] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:29.235798  4426 leader_election.cc:304] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3a75644a42d64405ae1dd61a955fbcc9; no voters: 
I20260430 07:55:29.236471  4426 leader_election.cc:290] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 3 election: Requested vote from peers 
I20260430 07:55:29.236709  4431 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Leader election won for term 3
I20260430 07:55:29.238420  4431 raft_consensus.cc:697] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 LEADER]: Becoming Leader. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Running, Role: LEADER
I20260430 07:55:29.239020  4431 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:29.239722  4426 sys_catalog.cc:565] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:55:29.241982  4433 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3a75644a42d64405ae1dd61a955fbcc9. Latest consensus state: current_term: 3 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:29.242502  4433 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:29.243485  4437 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:55:29.243290  4432 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:29.243855  4432 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:29.250819  4437 catalog_manager.cc:679] Loaded metadata for table table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]
I20260430 07:55:29.253513  4437 tablet_loader.cc:96] loaded metadata for tablet 0d01e5768e6f435695871abd9deaee86 (table table_a [id=ff76305e07ff4c568227a9fd62f5f0d9])
I20260430 07:55:29.254221  4437 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:55:29.256501  4437 catalog_manager.cc:1269] Loaded cluster ID: c893b49e53f64173a896b4b2dba678c2
I20260430 07:55:29.256640  4437 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:55:29.261569  4437 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:55:29.263605  4437 catalog_manager.cc:6055] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Loaded TSK: 0
I20260430 07:55:29.265599  4437 catalog_manager.cc:1524] Initializing in-progress tserver states...
I20260430 07:55:29.443801  4391 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" instance_seqno: 1777535719571248) as {username='slave'} at 127.0.105.2:40461; Asking this server to re-register.
I20260430 07:55:29.443953  4390 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" instance_seqno: 1777535720162926) as {username='slave'} at 127.0.105.3:41535; Asking this server to re-register.
I20260430 07:55:29.444870  3731 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:29.444993  3862 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:29.445273  3731 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:29.445340  3862 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:29.447366  4390 ts_manager.cc:194] Registered new tserver with Master: ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:29.447569  4391 ts_manager.cc:194] Registered new tserver with Master: 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:29.628762  4452 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 0696ac6914f940f2bcdc99c5d5c3d0e5)
I20260430 07:55:29.629276  4453 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 0696ac6914f940f2bcdc99c5d5c3d0e5)
I20260430 07:55:29.629249  4452 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.629454  4453 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.631273  4453 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:29.631304  4452 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
W20260430 07:55:29.632670  3621 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
W20260430 07:55:29.633836  3752 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
W20260430 07:55:29.644845  3621 leader_election.cc:336] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
W20260430 07:55:29.648022  3752 leader_election.cc:336] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
I20260430 07:55:29.649648  3685 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" candidate_term: 2 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779" is_pre_election: true
I20260430 07:55:29.649931  3685 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ea6bd19fddfe4b988f4682a0bdec2adc in term 1.
I20260430 07:55:29.650065  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:29.650494  3817 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 1.
I20260430 07:55:29.650700  3752 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 0696ac6914f940f2bcdc99c5d5c3d0e5
I20260430 07:55:29.651194  4453 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Leader pre-election won for term 2
I20260430 07:55:29.651434  4453 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Starting leader election (detected failure of leader 0696ac6914f940f2bcdc99c5d5c3d0e5)
I20260430 07:55:29.651401  3621 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 0696ac6914f940f2bcdc99c5d5c3d0e5
I20260430 07:55:29.651602  4453 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:29.651907  4452 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20260430 07:55:29.652046  4452 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting leader election (detected failure of leader 0696ac6914f940f2bcdc99c5d5c3d0e5)
I20260430 07:55:29.652375  4452 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:29.655665  4453 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.655755  4452 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.656584  4452 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: Requested vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:29.656613  4453 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 election: Requested vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:29.657748  3685 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" candidate_term: 2 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779"
I20260430 07:55:29.657711  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:29.658003  3685 raft_consensus.cc:2393] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate ea6bd19fddfe4b988f4682a0bdec2adc in current term 2: Already voted for candidate 9b6542a54f61418a894d790b5e1aa779 in this term.
I20260430 07:55:29.658043  3817 raft_consensus.cc:2393] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 9b6542a54f61418a894d790b5e1aa779 in current term 2: Already voted for candidate ea6bd19fddfe4b988f4682a0bdec2adc in this term.
W20260430 07:55:29.659235  3752 leader_election.cc:336] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
I20260430 07:55:29.659394  3752 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 0696ac6914f940f2bcdc99c5d5c3d0e5, 9b6542a54f61418a894d790b5e1aa779
W20260430 07:55:29.660178  3621 leader_election.cc:336] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111)
I20260430 07:55:29.660269  4453 raft_consensus.cc:2749] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20260430 07:55:29.660349  3621 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779; no voters: 0696ac6914f940f2bcdc99c5d5c3d0e5, ea6bd19fddfe4b988f4682a0bdec2adc
I20260430 07:55:29.660631  4452 raft_consensus.cc:2749] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20260430 07:55:29.674054  4261 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:29.675961  4390 master_service.cc:438] Got heartbeat from unknown tserver (permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" instance_seqno: 1777535723560032) as {username='slave'} at 127.0.105.4:57039; Asking this server to re-register.
I20260430 07:55:29.676832  4261 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:29.677166  4261 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:29.678709  4390 ts_manager.cc:194] Registered new tserver with Master: 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005)
I20260430 07:55:29.749594  4390 catalog_manager.cc:2257] Servicing CreateTable request from {username='slave'} at 127.0.0.1:36596:
name: "table_b"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20260430 07:55:29.751756  4390 catalog_manager.cc:7033] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table table_b in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20260430 07:55:29.780232  3666 tablet_service.cc:1511] Processing CreateTablet for tablet 930fc8ad15e14df2af31bad7407f95ad (DEFAULT_TABLE table=table_b [id=ef3c0c5faf52454d8e8704dce8f9f4d0]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:29.780746  3666 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 930fc8ad15e14df2af31bad7407f95ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:29.782742  3797 tablet_service.cc:1511] Processing CreateTablet for tablet 930fc8ad15e14df2af31bad7407f95ad (DEFAULT_TABLE table=table_b [id=ef3c0c5faf52454d8e8704dce8f9f4d0]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:29.783195  3797 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 930fc8ad15e14df2af31bad7407f95ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:29.782714  4196 tablet_service.cc:1511] Processing CreateTablet for tablet 930fc8ad15e14df2af31bad7407f95ad (DEFAULT_TABLE table=table_b [id=ef3c0c5faf52454d8e8704dce8f9f4d0]), partition=RANGE (key) PARTITION UNBOUNDED
I20260430 07:55:29.784206  4196 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 930fc8ad15e14df2af31bad7407f95ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:29.787778  4473 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Bootstrap starting.
I20260430 07:55:29.790369  4473 tablet_bootstrap.cc:654] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:29.791494  4474 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap starting.
I20260430 07:55:29.794595  4474 tablet_bootstrap.cc:654] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:29.802788  4473 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: No bootstrap required, opened a new log
I20260430 07:55:29.802986  4473 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Time spent bootstrapping tablet: real 0.015s	user 0.010s	sys 0.004s
I20260430 07:55:29.804152  4473 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.804435  4473 raft_consensus.cc:385] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:29.804531  4473 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Initialized, Role: FOLLOWER
I20260430 07:55:29.804786  4473 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.805230  4477 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Bootstrap starting.
I20260430 07:55:29.805373  4473 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Time spent starting tablet: real 0.002s	user 0.002s	sys 0.001s
I20260430 07:55:29.806759  4474 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: No bootstrap required, opened a new log
I20260430 07:55:29.806946  4474 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent bootstrapping tablet: real 0.016s	user 0.003s	sys 0.007s
I20260430 07:55:29.808602  4474 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.808830  4474 raft_consensus.cc:385] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:29.808930  4474 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea6bd19fddfe4b988f4682a0bdec2adc, State: Initialized, Role: FOLLOWER
I20260430 07:55:29.809214  4474 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.809823  4474 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent starting tablet: real 0.003s	user 0.003s	sys 0.000s
I20260430 07:55:29.811240  4477 tablet_bootstrap.cc:654] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Neither blocks nor log segments found. Creating new log.
I20260430 07:55:29.813066  4477 log.cc:826] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:29.817315  4477 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: No bootstrap required, opened a new log
I20260430 07:55:29.817687  4477 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Time spent bootstrapping tablet: real 0.013s	user 0.011s	sys 0.000s
I20260430 07:55:29.824460  4477 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.824956  4477 raft_consensus.cc:385] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20260430 07:55:29.825088  4477 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2e401b3aecfd46378718b182a4bec89f, State: Initialized, Role: FOLLOWER
I20260430 07:55:29.825965  4477 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.827901  4477 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Time spent starting tablet: real 0.010s	user 0.006s	sys 0.004s
W20260430 07:55:29.833065  4262 tablet.cc:2404] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20260430 07:55:29.860867  4479 raft_consensus.cc:493] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:29.861374  4479 raft_consensus.cc:515] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.863446  4479 leader_election.cc:290] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:29.871937  3685 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "2e401b3aecfd46378718b182a4bec89f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779" is_pre_election: true
I20260430 07:55:29.872300  3685 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2e401b3aecfd46378718b182a4bec89f in term 0.
I20260430 07:55:29.872387  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "2e401b3aecfd46378718b182a4bec89f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:29.872632  3817 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2e401b3aecfd46378718b182a4bec89f in term 0.
I20260430 07:55:29.873165  4151 leader_election.cc:304] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2e401b3aecfd46378718b182a4bec89f, 9b6542a54f61418a894d790b5e1aa779; no voters: 
I20260430 07:55:29.873927  4479 raft_consensus.cc:2804] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Leader pre-election won for term 1
I20260430 07:55:29.874065  4479 raft_consensus.cc:493] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:55:29.874258  4479 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:29.876469  4479 raft_consensus.cc:515] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.877442  4479 leader_election.cc:290] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [CANDIDATE]: Term 1 election: Requested vote from peers 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:29.878036  3817 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "2e401b3aecfd46378718b182a4bec89f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:29.877959  3685 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "2e401b3aecfd46378718b182a4bec89f" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779"
I20260430 07:55:29.878391  3685 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:29.878475  3817 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 0 FOLLOWER]: Advancing to term 1
I20260430 07:55:29.880439  3817 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2e401b3aecfd46378718b182a4bec89f in term 1.
I20260430 07:55:29.880489  3685 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2e401b3aecfd46378718b182a4bec89f in term 1.
I20260430 07:55:29.881172  4151 leader_election.cc:304] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2e401b3aecfd46378718b182a4bec89f, 9b6542a54f61418a894d790b5e1aa779; no voters: 
I20260430 07:55:29.881601  4479 raft_consensus.cc:2804] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 FOLLOWER]: Leader election won for term 1
I20260430 07:55:29.882114  4479 raft_consensus.cc:697] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 LEADER]: Becoming Leader. State: Replica: 2e401b3aecfd46378718b182a4bec89f, State: Running, Role: LEADER
I20260430 07:55:29.882702  4479 consensus_queue.cc:237] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:29.888759  4390 catalog_manager.cc:5671] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f reported cstate change: term changed from 0 to 1, leader changed from <none> to 2e401b3aecfd46378718b182a4bec89f (127.0.105.4). New cstate: current_term: 1 leader_uuid: "2e401b3aecfd46378718b182a4bec89f" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:29.914543   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:36583
--local_ip_for_outbound_sockets=127.0.105.1
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=42573
--webserver_interface=127.0.105.1
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0 with env {}
W20260430 07:55:30.346431  4483 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:30.346822  4483 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:30.346943  4483 flags.cc:432] Enabled unsafe flag: --tablet_copy_early_session_timeout_prob=1
W20260430 07:55:30.347045  4483 flags.cc:432] Enabled unsafe flag: --never_fsync=true
I20260430 07:55:30.355549  4479 consensus_queue.cc:1048] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [LEADER]: Connected to new peer: Peer: permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20260430 07:55:30.364815  4483 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:30.365033  4483 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:30.365201  4483 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:55:30.365201  4481 consensus_queue.cc:1048] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20260430 07:55:30.383733  4483 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:36583
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=42573
--flush_threshold_mb=0
--tablet_copy_early_session_timeout_prob=1
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:30.386035  4483 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:30.388213  4483 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:30.408636  4496 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:30.410212  4497 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:30.412914  4499 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:30.412914  4483 server_base.cc:1061] running on GCE node
I20260430 07:55:30.414187  4483 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:30.416198  4483 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:30.417479  4483 hybrid_clock.cc:648] HybridClock initialized: now 1777535730417371 us; error 102 us; skew 500 ppm
I20260430 07:55:30.417980  4483 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:30.420763  4483 webserver.cc:492] Webserver started at http://127.0.105.1:42573/ using document root <none> and password file <none>
I20260430 07:55:30.421867  4483 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:30.422045  4483 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:30.429576  4483 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.007s	sys 0.000s
I20260430 07:55:30.444497  4509 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:30.447142  4483 fs_manager.cc:730] Time spent opening block manager: real 0.016s	user 0.005s	sys 0.000s
I20260430 07:55:30.447399  4483 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:30.448252  4483 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 10
Total live bytes: 367100
Total live bytes (after alignment): 389120
Total number of LBM containers: 6 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:30.468099  4483 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:30.469161  4483 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:30.469607  4483 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:30.471022  4483 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:30.473964  4516 ts_tablet_manager.cc:548] Loading tablet metadata (0/1 complete)
I20260430 07:55:30.481150  4483 ts_tablet_manager.cc:585] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20260430 07:55:30.481400  4483 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.008s	user 0.000s	sys 0.000s
I20260430 07:55:30.481592  4483 ts_tablet_manager.cc:600] Registering tablets (0/1 complete)
I20260430 07:55:30.485584  4483 ts_tablet_manager.cc:616] Registered 1 tablets
I20260430 07:55:30.485842  4483 ts_tablet_manager.cc:595] Time spent register tablets: real 0.004s	user 0.002s	sys 0.002s
I20260430 07:55:30.486649  4516 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap starting.
I20260430 07:55:30.539072  4483 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:36583
I20260430 07:55:30.539103  4622 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:36583 every 8 connection(s)
I20260430 07:55:30.540868  4483 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:55:30.546087   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4483
I20260430 07:55:30.570447  4623 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:30.570870  4623 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:30.572049  4623 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:30.579718  4390 ts_manager.cc:194] Registered new tserver with Master: 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583)
I20260430 07:55:30.583315  4390 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:35423
I20260430 07:55:30.638039   420 tablet_copy-itest.cc:1499] Blocks diff: 0
I20260430 07:55:30.640321  4516 log.cc:826] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:30.754105  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:30.783607  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:30.799553  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.045s	user 0.018s	sys 0.005s Metrics: {"bytes_written":7292,"cfile_init":1,"dirs.queue_time_us":9434,"dirs.run_cpu_time_us":253,"dirs.run_wall_time_us":5825,"drs_written":1,"lbm_read_time_us":307,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1221,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":100,"spinlock_wait_cycles":319360,"thread_start_us":10078,"threads_started":1}
I20260430 07:55:30.801056  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 918 bytes on disk
I20260430 07:55:30.801990  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":125,"lbm_reads_lt_1ms":4}
I20260430 07:55:30.802932  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:30.819921  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:30.832269  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.029s	user 0.021s	sys 0.003s Metrics: {"bytes_written":7292,"cfile_init":1,"dirs.queue_time_us":62,"dirs.run_cpu_time_us":210,"dirs.run_wall_time_us":1645,"drs_written":1,"lbm_read_time_us":115,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1264,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":100}
I20260430 07:55:30.834425  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.005535
I20260430 07:55:30.858424  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.074s	user 0.017s	sys 0.000s Metrics: {"bytes_written":5984,"cfile_init":1,"dirs.queue_time_us":24669,"dirs.run_cpu_time_us":222,"dirs.run_wall_time_us":6847,"drs_written":1,"lbm_read_time_us":74,"lbm_reads_lt_1ms":4,"lbm_write_time_us":791,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":50,"spinlock_wait_cycles":355072,"thread_start_us":24845,"threads_started":1}
I20260430 07:55:30.860534  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:30.921085  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.046s	user 0.020s	sys 0.008s Metrics: {"bytes_written":8586,"cfile_init":1,"dirs.queue_time_us":58,"dirs.run_cpu_time_us":193,"dirs.run_wall_time_us":1492,"drs_written":1,"lbm_read_time_us":344,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1124,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":150}
I20260430 07:55:30.922657  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 1805 bytes on disk
I20260430 07:55:30.923748  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":220,"lbm_reads_lt_1ms":8}
I20260430 07:55:30.925613  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=0.988205
I20260430 07:55:30.998324  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.163s	user 0.059s	sys 0.055s Metrics: {"bytes_written":9877,"cfile_cache_hit":10,"cfile_cache_hit_bytes":3860,"cfile_cache_miss":26,"cfile_cache_miss_bytes":10942,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":5506,"dirs.run_cpu_time_us":1064,"dirs.run_wall_time_us":9892,"drs_written":1,"lbm_read_time_us":1507,"lbm_reads_lt_1ms":50,"lbm_write_time_us":1809,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":4614,"rows_written":200,"thread_start_us":82281,"threads_started":2}
I20260430 07:55:30.999536  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.077288  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.065s	user 0.018s	sys 0.004s Metrics: {"bytes_written":8587,"cfile_init":1,"dirs.queue_time_us":53,"dirs.run_cpu_time_us":205,"dirs.run_wall_time_us":6535,"drs_written":1,"lbm_read_time_us":71,"lbm_reads_lt_1ms":4,"lbm_write_time_us":934,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":150,"spinlock_wait_cycles":7276288}
I20260430 07:55:31.078781  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013097
I20260430 07:55:31.103552  4516 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap replayed 1/1 log segments. Stats: ops{read=218 overwritten=0 applied=218 ignored=192} inserts{seen=1250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:31.105752  4516 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap complete.
I20260430 07:55:31.107607  4516 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent bootstrapping tablet: real 0.622s	user 0.491s	sys 0.095s
I20260430 07:55:31.115151  4516 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:31.116283  4516 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0696ac6914f940f2bcdc99c5d5c3d0e5, State: Initialized, Role: FOLLOWER
I20260430 07:55:31.117358  4516 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 218, Last appended: 1.218, Last appended by leader: 218, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:31.132387  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.311s	user 0.037s	sys 0.012s Metrics: {"bytes_written":5984,"cfile_init":1,"compiler_manager_pool.queue_time_us":34123,"dirs.queue_time_us":11595,"dirs.run_cpu_time_us":255,"dirs.run_wall_time_us":4485,"drs_written":1,"lbm_read_time_us":103,"lbm_reads_lt_1ms":4,"lbm_write_time_us":868,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":50,"spinlock_wait_cycles":251648,"thread_start_us":50537,"threads_started":2}
I20260430 07:55:31.178071  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.220321  4623 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:31.220494  4516 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent starting tablet: real 0.113s	user 0.037s	sys 0.070s
I20260430 07:55:31.223822  4624 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.013938
I20260430 07:55:31.227759  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.301s	user 0.046s	sys 0.021s Metrics: {"bytes_written":9877,"cfile_cache_hit":10,"cfile_cache_hit_bytes":3865,"cfile_cache_miss":26,"cfile_cache_miss_bytes":10857,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":7288,"dirs.run_cpu_time_us":3688,"dirs.run_wall_time_us":51191,"drs_written":1,"lbm_read_time_us":1301,"lbm_reads_lt_1ms":46,"lbm_write_time_us":915,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":5933,"rows_written":200,"thread_start_us":86828,"threads_started":2}
I20260430 07:55:31.229260  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.283427  4658 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:31.283675  4658 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:31.294725  4658 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:31.302675  3816 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 3 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:31.302978  3816 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 2.
I20260430 07:55:31.303539  3621 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 
I20260430 07:55:31.303864  4658 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20260430 07:55:31.304028  4658 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:55:31.304155  4658 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:55:31.307384  4658 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:31.310127  4658 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 3 election: Requested vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:31.311379  3816 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 3 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:31.311662  3816 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Advancing to term 3
I20260430 07:55:31.318116  3816 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 3.
I20260430 07:55:31.318956  3621 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 
I20260430 07:55:31.320906  4658 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Leader election won for term 3
I20260430 07:55:31.330570  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.101s	user 0.058s	sys 0.003s Metrics: {"bytes_written":22897,"cfile_init":1,"dirs.queue_time_us":65,"dirs.run_cpu_time_us":261,"dirs.run_wall_time_us":2152,"drs_written":1,"lbm_read_time_us":134,"lbm_reads_lt_1ms":4,"lbm_write_time_us":965,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":700}
I20260430 07:55:31.332644  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.010118
I20260430 07:55:31.339501  4658 raft_consensus.cc:697] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 LEADER]: Becoming Leader. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Running, Role: LEADER
I20260430 07:55:31.340149  4658 consensus_queue.cc:237] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 218, Committed index: 218, Last appended: 1.218, Last appended by leader: 218, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:31.354255  4388 catalog_manager.cc:5671] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 reported cstate change: term changed from 1 to 3, leader changed from 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1) to 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2). New cstate: current_term: 3 leader_uuid: "9b6542a54f61418a894d790b5e1aa779" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:31.377521  4578 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 3 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" is_pre_election: true
I20260430 07:55:31.378338  4578 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 1.
I20260430 07:55:31.382444  4577 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 3 candidate_status { last_received { term: 1 index: 218 } } ignore_live_leader: false dest_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
I20260430 07:55:31.382807  4577 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 1 FOLLOWER]: Advancing to term 3
I20260430 07:55:31.393692  4577 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 3.
I20260430 07:55:31.414740  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.322s	user 0.040s	sys 0.004s Metrics: {"bytes_written":13805,"cfile_cache_hit":10,"cfile_cache_hit_bytes":6600,"cfile_cache_miss":26,"cfile_cache_miss_bytes":18764,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":9912,"dirs.run_cpu_time_us":900,"dirs.run_wall_time_us":12495,"drs_written":1,"lbm_read_time_us":86349,"lbm_reads_10-100_ms":1,"lbm_reads_lt_1ms":53,"lbm_write_time_us":1079,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":9568,"rows_written":350,"thread_start_us":16574,"threads_started":1}
I20260430 07:55:31.421989  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.450165  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.117s	user 0.062s	sys 0.004s Metrics: {"bytes_written":27988,"cfile_cache_hit":10,"cfile_cache_hit_bytes":16636,"cfile_cache_miss":26,"cfile_cache_miss_bytes":46800,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":3147,"dirs.run_cpu_time_us":723,"dirs.run_wall_time_us":12182,"drs_written":1,"lbm_read_time_us":1287,"lbm_reads_lt_1ms":54,"lbm_write_time_us":1277,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":18962,"rows_written":900,"thread_start_us":6874,"threads_started":1}
I20260430 07:55:31.452818  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.533648  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.080s	user 0.033s	sys 0.004s Metrics: {"bytes_written":12534,"cfile_init":1,"dirs.queue_time_us":9609,"dirs.run_cpu_time_us":288,"dirs.run_wall_time_us":2807,"drs_written":1,"lbm_read_time_us":223,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1247,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":300}
I20260430 07:55:31.539108  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014577
I20260430 07:55:31.527611  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.342s	user 0.057s	sys 0.014s Metrics: {"bytes_written":22913,"cfile_init":1,"dirs.queue_time_us":1577,"dirs.run_cpu_time_us":345,"dirs.run_wall_time_us":6436,"drs_written":1,"lbm_read_time_us":75,"lbm_reads_lt_1ms":4,"lbm_write_time_us":2083,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":700}
I20260430 07:55:31.553097  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.614564  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.187s	user 0.068s	sys 0.018s Metrics: {"bytes_written":26740,"cfile_init":1,"dirs.queue_time_us":1023,"dirs.run_cpu_time_us":241,"dirs.run_wall_time_us":9594,"drs_written":1,"lbm_read_time_us":94,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1353,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":850}
I20260430 07:55:31.616235  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.012901
I20260430 07:55:31.643998   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:31.799242  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.258s	user 0.084s	sys 0.005s Metrics: {"bytes_written":35732,"cfile_cache_hit":10,"cfile_cache_hit_bytes":22112,"cfile_cache_miss":26,"cfile_cache_miss_bytes":62124,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":9280,"dirs.run_cpu_time_us":690,"dirs.run_wall_time_us":4604,"drs_written":1,"lbm_read_time_us":1377,"lbm_reads_lt_1ms":54,"lbm_write_time_us":1651,"lbm_writes_lt_1ms":23,"mutex_wait_us":1188,"num_input_rowsets":2,"peak_mem_usage":23196,"rows_written":1200,"spinlock_wait_cycles":2304}
I20260430 07:55:31.801117  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.838198  3816 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 1 index: 218. Preceding OpId from leader: term: 3 index: 219. (index mismatch)
I20260430 07:55:31.839545  4658 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 219, Last known committed idx: 218, Time since last communication: 0.000s
I20260430 07:55:31.844223  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.224s	user 0.102s	sys 0.003s Metrics: {"bytes_written":35732,"cfile_cache_hit":10,"cfile_cache_hit_bytes":22117,"cfile_cache_miss":26,"cfile_cache_miss_bytes":62105,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":660,"dirs.run_cpu_time_us":1341,"dirs.run_wall_time_us":9091,"drs_written":1,"lbm_read_time_us":10886,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":53,"lbm_write_time_us":1024,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":19901,"rows_written":1200,"thread_start_us":11893,"threads_started":1}
I20260430 07:55:31.845487  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:31.925486  4577 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 1 index: 218. Preceding OpId from leader: term: 3 index: 219. (index mismatch)
I20260430 07:55:31.935470  4658 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 219, Last known committed idx: 218, Time since last communication: 0.000s
I20260430 07:55:31.958132  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.405s	user 0.074s	sys 0.012s Metrics: {"bytes_written":25398,"cfile_init":1,"dirs.queue_time_us":82,"dirs.run_cpu_time_us":225,"dirs.run_wall_time_us":2572,"drs_written":1,"lbm_read_time_us":80,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1024,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":800}
I20260430 07:55:31.985617  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 13613 bytes on disk
I20260430 07:55:32.003024  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.016s	user 0.003s	sys 0.000s Metrics: {"cfile_init":3,"lbm_read_time_us":356,"lbm_reads_lt_1ms":12}
I20260430 07:55:32.004280  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.980623
I20260430 07:55:32.056281  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.247s	user 0.055s	sys 0.012s Metrics: {"bytes_written":21574,"cfile_init":1,"dirs.queue_time_us":61,"dirs.run_cpu_time_us":236,"dirs.run_wall_time_us":2558,"drs_written":1,"lbm_read_time_us":2165,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":3,"lbm_write_time_us":1074,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":650}
I20260430 07:55:32.059454  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 16191 bytes on disk
I20260430 07:55:32.064455  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":154,"lbm_reads_lt_1ms":8}
I20260430 07:55:32.065500  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013271
I20260430 07:55:32.104432  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.259s	user 0.044s	sys 0.005s Metrics: {"bytes_written":20269,"cfile_init":1,"dirs.queue_time_us":69,"dirs.run_cpu_time_us":242,"dirs.run_wall_time_us":1952,"drs_written":1,"lbm_read_time_us":88,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1050,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":600,"spinlock_wait_cycles":73088}
I20260430 07:55:32.106411  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 15733 bytes on disk
I20260430 07:55:32.107327  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":167,"lbm_reads_lt_1ms":8}
I20260430 07:55:32.115366  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013255
I20260430 07:55:32.229841  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.164s	user 0.087s	sys 0.012s Metrics: {"bytes_written":56587,"cfile_cache_hit":10,"cfile_cache_hit_bytes":33977,"cfile_cache_miss":26,"cfile_cache_miss_bytes":94971,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":3510,"dirs.run_cpu_time_us":1322,"dirs.run_wall_time_us":12367,"drs_written":1,"lbm_read_time_us":4393,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":45,"lbm_write_time_us":11741,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":23,"num_input_rowsets":2,"peak_mem_usage":30811,"rows_written":1850,"thread_start_us":17911,"threads_started":1}
I20260430 07:55:32.231612  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:32.367331  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.356s	user 0.118s	sys 0.024s Metrics: {"bytes_written":44678,"cfile_cache_hit":15,"cfile_cache_hit_bytes":28607,"cfile_cache_miss":39,"cfile_cache_miss_bytes":80157,"cfile_init":7,"delta_iterators_relevant":6,"dirs.queue_time_us":8358,"dirs.run_cpu_time_us":1530,"dirs.run_wall_time_us":18523,"drs_written":1,"lbm_read_time_us":26350,"lbm_reads_1-10_ms":2,"lbm_reads_10-100_ms":1,"lbm_reads_lt_1ms":64,"lbm_write_time_us":20317,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":22,"num_input_rowsets":3,"peak_mem_usage":29775,"rows_written":1550,"thread_start_us":16838,"threads_started":2}
I20260430 07:55:32.371554  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:32.373855  4514 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 1.148s	user 0.511s	sys 0.084s Metrics: {"bytes_written":276591,"cfile_cache_hit":10,"cfile_cache_hit_bytes":178410,"cfile_cache_miss":34,"cfile_cache_miss_bytes":496554,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":172728,"dirs.run_cpu_time_us":2070,"dirs.run_wall_time_us":25056,"drs_written":1,"lbm_read_time_us":3920,"lbm_reads_lt_1ms":62,"lbm_write_time_us":2765,"lbm_writes_lt_1ms":32,"mutex_wait_us":505,"num_input_rowsets":2,"peak_mem_usage":147897,"rows_written":9600,"thread_start_us":196158,"threads_started":3}
I20260430 07:55:32.377274  4624 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling FlushMRSOp(0d01e5768e6f435695871abd9deaee86): perf score=1.000000
I20260430 07:55:32.389828  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.158s	user 0.042s	sys 0.000s Metrics: {"bytes_written":20263,"cfile_init":1,"dirs.queue_time_us":59,"dirs.run_cpu_time_us":208,"dirs.run_wall_time_us":2871,"drs_written":1,"lbm_read_time_us":84,"lbm_reads_lt_1ms":4,"lbm_write_time_us":21186,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":22,"peak_mem_usage":0,"rows_written":600}
I20260430 07:55:32.391376  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014600
I20260430 07:55:32.526772  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.409s	user 0.094s	sys 0.009s Metrics: {"bytes_written":55175,"cfile_cache_hit":10,"cfile_cache_hit_bytes":33062,"cfile_cache_miss":26,"cfile_cache_miss_bytes":92360,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":27463,"dirs.run_cpu_time_us":1111,"dirs.run_wall_time_us":9965,"drs_written":1,"lbm_read_time_us":1214,"lbm_reads_lt_1ms":46,"lbm_write_time_us":1358,"lbm_writes_lt_1ms":24,"num_input_rowsets":2,"peak_mem_usage":31746,"rows_written":1800,"thread_start_us":26422,"threads_started":2}
I20260430 07:55:32.528187  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:32.646791  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.273s	user 0.089s	sys 0.013s Metrics: {"bytes_written":34527,"cfile_init":1,"dirs.queue_time_us":57,"dirs.run_cpu_time_us":233,"dirs.run_wall_time_us":3175,"drs_written":1,"lbm_read_time_us":213,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1056,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1150}
I20260430 07:55:32.648099  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 23505 bytes on disk
I20260430 07:55:32.649063  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":179,"lbm_reads_lt_1ms":8}
I20260430 07:55:32.650043  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014331
W20260430 07:55:32.652078  3621 outbound_call.cc:321] RPC callback for RPC call kudu.consensus.ConsensusService.UpdateConsensus -> {remote=127.0.105.1:36583, user_credentials={real_user=slave}} blocked reactor thread for 72126.5us
I20260430 07:55:32.660331   420 tablet_copy-itest.cc:1499] Blocks diff: 5
I20260430 07:55:32.724820  4514 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: FlushMRSOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.347s	user 0.081s	sys 0.022s Metrics: {"bytes_written":37125,"cfile_init":1,"compiler_manager_pool.queue_time_us":4043,"dirs.queue_time_us":58,"dirs.run_cpu_time_us":157,"dirs.run_wall_time_us":1033,"drs_written":1,"lbm_read_time_us":99,"lbm_reads_lt_1ms":4,"lbm_write_time_us":2057,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1250,"thread_start_us":8030,"threads_started":1}
I20260430 07:55:32.733871  4624 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86): perf score=1.014190
I20260430 07:55:32.905918  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.371s	user 0.075s	sys 0.004s Metrics: {"bytes_written":34546,"cfile_init":1,"dirs.queue_time_us":14228,"dirs.run_cpu_time_us":230,"dirs.run_wall_time_us":1711,"drs_written":1,"lbm_read_time_us":75,"lbm_reads_lt_1ms":4,"lbm_write_time_us":922,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1150}
I20260430 07:55:32.907229  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014381
I20260430 07:55:32.942406  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.544s	user 0.137s	sys 0.087s Metrics: {"bytes_written":71978,"cfile_cache_hit":10,"cfile_cache_hit_bytes":44927,"cfile_cache_miss":26,"cfile_cache_miss_bytes":125251,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":17587,"dirs.run_cpu_time_us":1869,"dirs.run_wall_time_us":10058,"drs_written":1,"lbm_read_time_us":1529,"lbm_reads_lt_1ms":54,"lbm_write_time_us":2413,"lbm_writes_1-10_ms":1,"lbm_writes_lt_1ms":24,"mutex_wait_us":6853,"num_input_rowsets":2,"peak_mem_usage":39361,"rows_written":2450,"thread_start_us":166323,"threads_started":2}
I20260430 07:55:32.943506  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 21689 bytes on disk
I20260430 07:55:32.944315  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":84,"lbm_reads_lt_1ms":4}
I20260430 07:55:32.950947  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:33.221874  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.571s	user 0.109s	sys 0.023s Metrics: {"bytes_written":78281,"cfile_cache_hit":10,"cfile_cache_hit_bytes":49492,"cfile_cache_miss":26,"cfile_cache_miss_bytes":137880,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":8959,"dirs.run_cpu_time_us":1628,"dirs.run_wall_time_us":16747,"drs_written":1,"lbm_read_time_us":5551,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":45,"lbm_write_time_us":2168,"lbm_writes_lt_1ms":25,"num_input_rowsets":2,"peak_mem_usage":41276,"rows_written":2700,"thread_start_us":67930,"threads_started":3}
I20260430 07:55:33.223109  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:33.238130  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.286s	user 0.083s	sys 0.028s Metrics: {"bytes_written":37120,"cfile_init":1,"dirs.queue_time_us":2718,"dirs.run_cpu_time_us":371,"dirs.run_wall_time_us":3243,"drs_written":1,"lbm_read_time_us":114,"lbm_reads_lt_1ms":4,"lbm_write_time_us":2023,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1250}
I20260430 07:55:33.254065  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 10801 bytes on disk
I20260430 07:55:33.254866  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":111,"lbm_reads_lt_1ms":4}
I20260430 07:55:33.255813  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013809
I20260430 07:55:33.635011  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.718s	user 0.134s	sys 0.011s Metrics: {"bytes_written":84582,"cfile_cache_hit":10,"cfile_cache_hit_bytes":54052,"cfile_cache_miss":26,"cfile_cache_miss_bytes":150532,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":30919,"dirs.run_cpu_time_us":1510,"dirs.run_wall_time_us":26887,"drs_written":1,"lbm_read_time_us":23104,"lbm_reads_10-100_ms":1,"lbm_reads_lt_1ms":53,"lbm_write_time_us":1301,"lbm_writes_lt_1ms":25,"mutex_wait_us":1591,"num_input_rowsets":2,"peak_mem_usage":46486,"rows_written":2950,"spinlock_wait_cycles":4092288,"thread_start_us":82990,"threads_started":3}
I20260430 07:55:33.636430  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 26406 bytes on disk
I20260430 07:55:33.637125  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":78,"lbm_reads_lt_1ms":4}
I20260430 07:55:33.637950  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:33.682471   420 tablet_copy-itest.cc:1499] Blocks diff: 5
I20260430 07:55:33.924103  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.668s	user 0.190s	sys 0.183s Metrics: {"bytes_written":108130,"cfile_cache_hit":10,"cfile_cache_hit_bytes":67765,"cfile_cache_miss":28,"cfile_cache_miss_bytes":189127,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":11341,"dirs.run_cpu_time_us":1196,"dirs.run_wall_time_us":20238,"drs_written":1,"lbm_read_time_us":1687,"lbm_reads_lt_1ms":48,"lbm_write_time_us":1833,"lbm_writes_lt_1ms":26,"num_input_rowsets":2,"peak_mem_usage":55681,"rows_written":3700,"thread_start_us":252577,"threads_started":3}
I20260430 07:55:33.925482  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:34.138191  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.915s	user 0.117s	sys 0.026s Metrics: {"bytes_written":48605,"cfile_init":1,"dirs.queue_time_us":9624,"dirs.run_cpu_time_us":224,"dirs.run_wall_time_us":5806,"drs_written":1,"lbm_read_time_us":81,"lbm_reads_lt_1ms":4,"lbm_write_time_us":15199,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":22,"peak_mem_usage":0,"rows_written":1700,"thread_start_us":10050,"threads_started":1}
I20260430 07:55:34.141474  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 38923 bytes on disk
I20260430 07:55:34.142871  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":2,"lbm_read_time_us":215,"lbm_reads_lt_1ms":8}
I20260430 07:55:34.159971  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014392
I20260430 07:55:34.219779  4514 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: CompactRowSetsOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 1.486s	user 0.486s	sys 0.072s Metrics: {"bytes_written":313544,"cfile_cache_hit":10,"cfile_cache_hit_bytes":203825,"cfile_cache_miss":34,"cfile_cache_miss_bytes":565801,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":19043,"dirs.run_cpu_time_us":1506,"dirs.run_wall_time_us":22619,"drs_written":1,"lbm_read_time_us":19530,"lbm_reads_10-100_ms":1,"lbm_reads_lt_1ms":61,"lbm_write_time_us":3325,"lbm_writes_lt_1ms":33,"mutex_wait_us":6,"num_input_rowsets":2,"peak_mem_usage":166355,"rows_written":10850,"thread_start_us":191492,"threads_started":4,"wal-append.queue_time_us":57030}
I20260430 07:55:34.220920  4624 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:34.221807  4514 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":105,"lbm_reads_lt_1ms":4}
I20260430 07:55:34.248962  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.322s	user 0.135s	sys 0.006s Metrics: {"bytes_written":71930,"cfile_init":1,"dirs.queue_time_us":1878,"dirs.run_cpu_time_us":400,"dirs.run_wall_time_us":2726,"drs_written":1,"lbm_read_time_us":95,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1384,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":2450}
I20260430 07:55:34.265920  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014819
I20260430 07:55:34.312999  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.675s	user 0.147s	sys 0.007s Metrics: {"bytes_written":78285,"cfile_init":1,"dirs.queue_time_us":43,"dirs.run_cpu_time_us":185,"dirs.run_wall_time_us":3829,"drs_written":1,"lbm_read_time_us":77,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1361,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":2700}
I20260430 07:55:34.314400  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 24174 bytes on disk
I20260430 07:55:34.341420  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":117,"lbm_reads_lt_1ms":4}
I20260430 07:55:34.342667  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014899
I20260430 07:55:34.741572   420 tablet_copy-itest.cc:1499] Blocks diff: 10
I20260430 07:55:34.908417  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.748s	user 0.190s	sys 0.028s Metrics: {"bytes_written":126401,"cfile_cache_hit":10,"cfile_cache_hit_bytes":80535,"cfile_cache_miss":28,"cfile_cache_miss_bytes":224799,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":36177,"dirs.run_cpu_time_us":1622,"dirs.run_wall_time_us":18195,"drs_written":1,"lbm_read_time_us":1390,"lbm_reads_lt_1ms":48,"lbm_write_time_us":24342,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":25,"num_input_rowsets":2,"peak_mem_usage":69201,"rows_written":4400,"thread_start_us":59479,"threads_started":3}
I20260430 07:55:34.911258  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 39706 bytes on disk
I20260430 07:55:34.912189  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":107,"lbm_reads_lt_1ms":4}
I20260430 07:55:34.915362  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:34.999068  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.727s	user 0.239s	sys 0.026s Metrics: {"bytes_written":176827,"cfile_cache_hit":10,"cfile_cache_hit_bytes":112498,"cfile_cache_miss":30,"cfile_cache_miss_bytes":314340,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":91765,"dirs.run_cpu_time_us":2303,"dirs.run_wall_time_us":12851,"drs_written":1,"lbm_read_time_us":18735,"lbm_reads_10-100_ms":1,"lbm_reads_lt_1ms":57,"lbm_write_time_us":3076,"lbm_writes_lt_1ms":28,"num_input_rowsets":2,"peak_mem_usage":93646,"rows_written":6150,"thread_start_us":159079,"threads_started":2}
I20260430 07:55:35.001964  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 56457 bytes on disk
I20260430 07:55:35.002689  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":85,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.004031  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:35.270336  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.266s	user 0.178s	sys 0.016s Metrics: {"bytes_written":86012,"cfile_init":1,"dirs.queue_time_us":54,"dirs.run_cpu_time_us":186,"dirs.run_wall_time_us":1958,"drs_written":1,"lbm_read_time_us":95,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1254,"lbm_writes_lt_1ms":25,"peak_mem_usage":0,"rows_written":3000}
I20260430 07:55:35.273499  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 26421 bytes on disk
I20260430 07:55:35.276922  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":103,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.283186  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014817
I20260430 07:55:35.387764   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4483
I20260430 07:55:35.389149  4618 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:35.425848  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 1.028s	user 0.239s	sys 0.026s Metrics: {"bytes_written":163592,"cfile_cache_hit":11,"cfile_cache_hit_bytes":112498,"cfile_cache_miss":29,"cfile_cache_miss_bytes":279876,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":14119,"dirs.run_cpu_time_us":1228,"dirs.run_wall_time_us":11002,"drs_written":1,"lbm_read_time_us":1396,"lbm_reads_lt_1ms":49,"lbm_write_time_us":38259,"lbm_writes_10-100_ms":1,"lbm_writes_lt_1ms":27,"mutex_wait_us":2423,"num_input_rowsets":2,"peak_mem_usage":86021,"rows_written":5650,"thread_start_us":33917,"threads_started":3}
I20260430 07:55:35.427984  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:35.481510  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.566s	user 0.279s	sys 0.036s Metrics: {"bytes_written":130544,"cfile_init":1,"compiler_manager_pool.queue_time_us":256526,"compiler_manager_pool.run_cpu_time_us":12,"compiler_manager_pool.run_wall_time_us":12,"dirs.queue_time_us":69,"dirs.run_cpu_time_us":194,"dirs.run_wall_time_us":1847,"drs_written":1,"lbm_read_time_us":80,"lbm_reads_lt_1ms":4,"lbm_write_time_us":2010,"lbm_writes_lt_1ms":27,"peak_mem_usage":0,"rows_written":4550}
I20260430 07:55:35.482769  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014756
I20260430 07:55:35.622284   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4483
I20260430 07:55:35.722684  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.439s	user 0.347s	sys 0.019s Metrics: {"bytes_written":264442,"cfile_cache_hit":10,"cfile_cache_hit_bytes":169327,"cfile_cache_miss":32,"cfile_cache_miss_bytes":471109,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":3577,"dirs.run_cpu_time_us":3657,"dirs.run_wall_time_us":8992,"drs_written":1,"lbm_read_time_us":2692,"lbm_reads_lt_1ms":52,"lbm_write_time_us":9556,"lbm_writes_1-10_ms":1,"lbm_writes_lt_1ms":31,"mutex_wait_us":33,"num_input_rowsets":2,"peak_mem_usage":139389,"rows_written":9150,"thread_start_us":2897,"threads_started":4}
I20260430 07:55:35.723948  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 85740 bytes on disk
I20260430 07:55:35.725127  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":102,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.727254  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:35.763535  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.335s	user 0.234s	sys 0.040s Metrics: {"bytes_written":131691,"cfile_init":1,"dirs.queue_time_us":116,"dirs.run_cpu_time_us":240,"dirs.run_wall_time_us":2098,"drs_written":1,"lbm_read_time_us":265,"lbm_reads_lt_1ms":4,"lbm_write_time_us":2309,"lbm_writes_lt_1ms":27,"peak_mem_usage":0,"rows_written":4600}
I20260430 07:55:35.764576  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.014777
I20260430 07:55:35.816222  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.088s	user 0.076s	sys 0.004s Metrics: {"bytes_written":33194,"cfile_init":1,"dirs.queue_time_us":175,"dirs.run_cpu_time_us":676,"dirs.run_wall_time_us":2156,"drs_written":1,"lbm_read_time_us":84,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1782,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1100}
I20260430 07:55:35.817440  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 9601 bytes on disk
I20260430 07:55:35.818598  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":120,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.819550  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013901
I20260430 07:55:35.854267  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.371s	user 0.307s	sys 0.037s Metrics: {"bytes_written":259361,"cfile_cache_hit":12,"cfile_cache_hit_bytes":193106,"cfile_cache_miss":30,"cfile_cache_miss_bytes":431864,"cfile_init":6,"delta_iterators_relevant":4,"dirs.queue_time_us":4973,"dirs.run_cpu_time_us":1301,"dirs.run_wall_time_us":11390,"drs_written":1,"lbm_read_time_us":1962,"lbm_reads_lt_1ms":54,"lbm_write_time_us":2049,"lbm_writes_lt_1ms":32,"mutex_wait_us":3,"num_input_rowsets":2,"peak_mem_usage":136339,"rows_written":8950,"thread_start_us":4980,"threads_started":3}
I20260430 07:55:35.856251  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 83610 bytes on disk
I20260430 07:55:35.857328  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":77,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.858129  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.000000
I20260430 07:55:35.950811  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: FlushMRSOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.092s	user 0.063s	sys 0.025s Metrics: {"bytes_written":38293,"cfile_init":1,"dirs.queue_time_us":102,"dirs.run_cpu_time_us":369,"dirs.run_wall_time_us":1459,"drs_written":1,"lbm_read_time_us":66,"lbm_reads_lt_1ms":4,"lbm_write_time_us":1796,"lbm_writes_lt_1ms":23,"peak_mem_usage":0,"rows_written":1300}
I20260430 07:55:35.952512  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 11329 bytes on disk
I20260430 07:55:35.954770  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":296,"lbm_reads_lt_1ms":4}
I20260430 07:55:35.955892  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad): perf score=1.013922
I20260430 07:55:36.105769  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.340s	user 0.322s	sys 0.016s Metrics: {"bytes_written":293396,"cfile_cache_hit":10,"cfile_cache_hit_bytes":189430,"cfile_cache_miss":34,"cfile_cache_miss_bytes":528906,"cfile_init":7,"delta_iterators_relevant":4,"dirs.queue_time_us":3360,"dirs.run_cpu_time_us":673,"dirs.run_wall_time_us":4475,"drs_written":1,"lbm_read_time_us":3474,"lbm_reads_1-10_ms":1,"lbm_reads_lt_1ms":61,"lbm_write_time_us":2021,"lbm_writes_lt_1ms":32,"mutex_wait_us":33,"num_input_rowsets":2,"peak_mem_usage":156162,"rows_written":10250,"spinlock_wait_cycles":17792,"thread_start_us":2535,"threads_started":5}
I20260430 07:55:36.106683  3732 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:36.107788  3623 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":95,"lbm_reads_lt_1ms":4}
W20260430 07:55:36.203934  3621 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111) [suppressed 1 similar messages]
W20260430 07:55:36.213478  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:36.222838  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.403s	user 0.385s	sys 0.009s Metrics: {"bytes_written":293396,"cfile_cache_hit":10,"cfile_cache_hit_bytes":192425,"cfile_cache_miss":34,"cfile_cache_miss_bytes":534295,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":1404,"dirs.run_cpu_time_us":298,"dirs.run_wall_time_us":3956,"drs_written":1,"lbm_read_time_us":1820,"lbm_reads_lt_1ms":54,"lbm_write_time_us":3260,"lbm_writes_lt_1ms":32,"mutex_wait_us":170,"num_input_rowsets":2,"peak_mem_usage":157350,"rows_written":10250,"thread_start_us":268,"threads_started":1}
I20260430 07:55:36.223738  3863 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:36.224256  3754 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":58,"lbm_reads_lt_1ms":4}
I20260430 07:55:36.317284  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: CompactRowSetsOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.361s	user 0.335s	sys 0.025s Metrics: {"bytes_written":293396,"cfile_cache_hit":10,"cfile_cache_hit_bytes":192225,"cfile_cache_miss":34,"cfile_cache_miss_bytes":533727,"cfile_init":5,"delta_iterators_relevant":4,"dirs.queue_time_us":1757,"dirs.run_cpu_time_us":326,"dirs.run_wall_time_us":1418,"drs_written":1,"lbm_read_time_us":1952,"lbm_reads_lt_1ms":54,"lbm_write_time_us":2098,"lbm_writes_lt_1ms":32,"mutex_wait_us":25,"num_input_rowsets":2,"peak_mem_usage":157150,"rows_written":10250,"spinlock_wait_cycles":6784,"thread_start_us":607,"threads_started":2}
I20260430 07:55:36.318260  4262 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:36.319684  4153 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":793,"lbm_reads_lt_1ms":4}
W20260430 07:55:38.661782  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
W20260430 07:55:41.069411  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
W20260430 07:55:41.594681  3621 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111) [suppressed 10 similar messages]
W20260430 07:55:43.468145  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 16: this message will repeat every 5th retry.
I20260430 07:55:45.902328  4762 consensus_queue.cc:579] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Leader has been unable to successfully communicate with peer 0696ac6914f940f2bcdc99c5d5c3d0e5 for more than 10 seconds (10.279s)
W20260430 07:55:45.904871  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 21: this message will repeat every 5th retry.
I20260430 07:55:45.908385  3685 consensus_queue.cc:237] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 219, Committed index: 219, Last appended: 3.219, Last appended by leader: 218, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:45.911592  3816 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 3 index: 219. Preceding OpId from leader: term: 3 index: 220. (index mismatch)
I20260430 07:55:45.912642  4762 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 220, Last known committed idx: 219, Time since last communication: 0.000s
W20260430 07:55:45.913750  3621 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:45.917299  4775 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 LEADER]: Committing config change with OpId 3.220: config changed from index -1 to 220, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) added. New config: { opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } } }
I20260430 07:55:45.918192  3816 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Committing config change with OpId 3.220: config changed from index -1 to 220, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) added. New config: { opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } } }
W20260430 07:55:45.922118  3619 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): Couldn't send request to peer 2e401b3aecfd46378718b182a4bec89f. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0d01e5768e6f435695871abd9deaee86. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:45.923174  4377 catalog_manager.cc:5184] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0d01e5768e6f435695871abd9deaee86 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20260430 07:55:45.924793  4388 catalog_manager.cc:5671] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 reported cstate change: config changed from index -1 to 220, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) added. New cstate: current_term: 3 leader_uuid: "9b6542a54f61418a894d790b5e1aa779" committed_config { opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:46.450120  4785 ts_tablet_manager.cc:933] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Initiating tablet copy from peer 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:46.451077  4785 tablet_copy_client.cc:323] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.2:36961
I20260430 07:55:46.452594  3706 tablet_copy_service.cc:140] P 9b6542a54f61418a894d790b5e1aa779: Received BeginTabletCopySession request for tablet 0d01e5768e6f435695871abd9deaee86 from peer 2e401b3aecfd46378718b182a4bec89f ({username='slave'} at 127.0.105.4:42893)
I20260430 07:55:46.452857  3706 tablet_copy_service.cc:161] P 9b6542a54f61418a894d790b5e1aa779: Beginning new tablet copy session on tablet 0d01e5768e6f435695871abd9deaee86 from peer 2e401b3aecfd46378718b182a4bec89f at {username='slave'} at 127.0.105.4:42893: session id = 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86
I20260430 07:55:46.456560  3706 tablet_copy_source_session.cc:215] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Tablet Copy: opened 5 blocks and 1 log segments
W20260430 07:55:46.457779  3706 tablet_copy_service.cc:227] P 9b6542a54f61418a894d790b5e1aa779: Timing out tablet copy session due to flag --tablet_copy_early_session_timeout_prob being set to 1
I20260430 07:55:46.458009  3706 tablet_copy_service.cc:434] P 9b6542a54f61418a894d790b5e1aa779: ending tablet copy session 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86 on tablet 0d01e5768e6f435695871abd9deaee86 with peer 2e401b3aecfd46378718b182a4bec89f
I20260430 07:55:46.463543  4785 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0d01e5768e6f435695871abd9deaee86. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:46.469125  4785 tablet_copy_client.cc:806] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Starting download of 5 data blocks...
W20260430 07:55:46.471009  3706 tablet_copy_service.cc:479] P 9b6542a54f61418a894d790b5e1aa779: Error handling TabletCopyService RPC request from {username='slave'} at 127.0.105.4:42893: No such session: Not found: Tablet Copy session with Session ID "2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86" not found
W20260430 07:55:46.479390  4785 ts_tablet_manager.cc:1011] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Tablet Copy: Unable to fetch data from remote peer 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): Remote error: Unable to download block with id 15333039683372618950: Unable to download block 15333039683372618950: unable to fetch data from remote: No such session: NO_SESSION: received error code Not found: Tablet Copy session with Session ID "2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86" not found from remote service
I20260430 07:55:46.479786  4785 tablet_replica.cc:333] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: stopping tablet replica
I20260430 07:55:46.479916  4785 raft_consensus.cc:2243] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 3 LEARNER]: Raft consensus shutting down.
I20260430 07:55:46.480014  4785 raft_consensus.cc:2272] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 3 LEARNER]: Raft consensus is shut down!
I20260430 07:55:46.481366  3706 tablet_copy_service.cc:342] P 9b6542a54f61418a894d790b5e1aa779: Request end of tablet copy session 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86 received from {username='slave'} at 127.0.105.4:42893
W20260430 07:55:46.481654  3706 tablet_copy_service.cc:479] P 9b6542a54f61418a894d790b5e1aa779: Error handling TabletCopyService RPC request from {username='slave'} at 127.0.105.4:42893: No such session: Not found: Tablet Copy session with Session ID "2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86" not found
W20260430 07:55:46.482792  4785 tablet_copy_client.cc:1131] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Unable to close tablet copy session: Remote error: failure ending tablet copy session: No such session: NO_SESSION: received error code Not found: Tablet Copy session with Session ID "2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86" not found from remote service
W20260430 07:55:46.483573  4785 tablet_metadata.cc:476] failed to load DataDirGroup from superblock: Already present: tried to load directory group for tablet 0d01e5768e6f435695871abd9deaee86 but one is already registered
I20260430 07:55:46.483798  4785 ts_tablet_manager.cc:1916] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:55:46.490150  4785 ts_tablet_manager.cc:1929] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.0
I20260430 07:55:46.755487   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3604
I20260430 07:55:46.756632  3726 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:46.973369   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3604
W20260430 07:55:47.037346  4152 connection.cc:570] server connection from 127.0.105.2:36749 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20260430 07:55:47.037464  4151 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv got EOF from 127.0.105.2:36961 (error 108)
W20260430 07:55:47.037911  4377 connection.cc:570] server connection from 127.0.105.2:40461 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:47.038141   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 3735
I20260430 07:55:47.039449  3857 generic_service.cc:196] Checking for leaks (request via RPC)
W20260430 07:55:47.039822  4151 consensus_peers.cc:597] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f -> Peer 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): Couldn't send request to peer 9b6542a54f61418a894d790b5e1aa779. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.2:36961: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:47.257747   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 3735
I20260430 07:55:47.309649   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4127
I20260430 07:55:47.310784  4256 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:47.477421   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4127
I20260430 07:55:47.517791   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4357
I20260430 07:55:47.519169  4421 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:47.638688   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4357
I20260430 07:55:47.666049   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--webserver_interface=127.0.105.62
--webserver_port=42371
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000 with env {}
W20260430 07:55:48.107723  4797 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:48.108191  4797 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:48.108345  4797 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:48.117683  4797 flags.cc:432] Enabled experimental flag: --ipki_ca_key_size=768
W20260430 07:55:48.117800  4797 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:48.117854  4797 flags.cc:432] Enabled experimental flag: --tsk_num_rsa_bits=512
W20260430 07:55:48.117942  4797 flags.cc:432] Enabled experimental flag: --rpc_reuseport=true
I20260430 07:55:48.128962  4797 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.105.62:33525
--tserver_unresponsive_timeout_ms=5000
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.105.62:33525
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.105.62
--webserver_port=42371
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:48.131094  4797 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:48.133244  4797 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:48.144268  4802 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:48.144646  4805 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:48.144932  4803 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:48.145571  4797 server_base.cc:1061] running on GCE node
I20260430 07:55:48.146597  4797 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:48.148217  4797 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:48.149775  4797 hybrid_clock.cc:648] HybridClock initialized: now 1777535748149662 us; error 86 us; skew 500 ppm
I20260430 07:55:48.150465  4797 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:48.153323  4797 webserver.cc:492] Webserver started at http://127.0.105.62:42371/ using document root <none> and password file <none>
I20260430 07:55:48.154196  4797 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:48.154312  4797 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:48.160833  4797 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.003s	sys 0.003s
I20260430 07:55:48.164301  4811 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:48.166049  4797 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.000s
I20260430 07:55:48.166262  4797 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
uuid: "3a75644a42d64405ae1dd61a955fbcc9"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:48.167009  4797 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:48.208388  4797 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:48.209565  4797 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:48.209967  4797 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:48.236307  4797 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.62:33525
I20260430 07:55:48.236369  4862 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.62:33525 every 8 connection(s)
I20260430 07:55:48.238328  4797 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb
I20260430 07:55:48.241878   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4797
I20260430 07:55:48.243129   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.1:36583
--local_ip_for_outbound_sockets=127.0.105.1
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=42573
--webserver_interface=127.0.105.1
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0
--tablet_copy_early_session_timeout_prob=0.0 with env {}
I20260430 07:55:48.246767  4863 sys_catalog.cc:263] Verifying existing consensus state
I20260430 07:55:48.251695  4863 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap starting.
I20260430 07:55:48.276044  4863 log.cc:826] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:48.302552  4863 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=7 ignored=0} mutations{seen=6 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:48.303336  4863 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Bootstrap complete.
I20260430 07:55:48.311136  4863 raft_consensus.cc:359] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:48.311954  4863 raft_consensus.cc:740] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Initialized, Role: FOLLOWER
I20260430 07:55:48.312695  4863 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 3.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:48.312952  4863 raft_consensus.cc:399] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20260430 07:55:48.313093  4863 raft_consensus.cc:493] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20260430 07:55:48.313320  4863 raft_consensus.cc:3060] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:55:48.317406  4863 raft_consensus.cc:515] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:48.317987  4863 leader_election.cc:304] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3a75644a42d64405ae1dd61a955fbcc9; no voters: 
I20260430 07:55:48.318797  4863 leader_election.cc:290] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [CANDIDATE]: Term 4 election: Requested vote from peers 
I20260430 07:55:48.318950  4867 raft_consensus.cc:2804] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 4 FOLLOWER]: Leader election won for term 4
I20260430 07:55:48.320118  4867 raft_consensus.cc:697] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [term 4 LEADER]: Becoming Leader. State: Replica: 3a75644a42d64405ae1dd61a955fbcc9, State: Running, Role: LEADER
I20260430 07:55:48.320832  4867 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 3.14, Last appended by leader: 14, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } }
I20260430 07:55:48.321426  4863 sys_catalog.cc:565] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: configured and running, proceeding with master startup.
I20260430 07:55:48.323858  4869 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3a75644a42d64405ae1dd61a955fbcc9. Latest consensus state: current_term: 4 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:48.324657  4869 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:48.325356  4868 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "3a75644a42d64405ae1dd61a955fbcc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3a75644a42d64405ae1dd61a955fbcc9" member_type: VOTER last_known_addr { host: "127.0.105.62" port: 33525 } } }
I20260430 07:55:48.325666  4868 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9 [sys.catalog]: This master's current role is: LEADER
I20260430 07:55:48.334977  4872 catalog_manager.cc:1485] Loading table and tablet metadata into memory...
I20260430 07:55:48.343467  4872 catalog_manager.cc:679] Loaded metadata for table table_b [id=ef3c0c5faf52454d8e8704dce8f9f4d0]
I20260430 07:55:48.344378  4872 catalog_manager.cc:679] Loaded metadata for table table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]
I20260430 07:55:48.348153  4872 tablet_loader.cc:96] loaded metadata for tablet 0d01e5768e6f435695871abd9deaee86 (table table_a [id=ff76305e07ff4c568227a9fd62f5f0d9])
I20260430 07:55:48.349861  4872 tablet_loader.cc:96] loaded metadata for tablet 930fc8ad15e14df2af31bad7407f95ad (table table_b [id=ef3c0c5faf52454d8e8704dce8f9f4d0])
I20260430 07:55:48.350513  4872 catalog_manager.cc:1494] Initializing Kudu cluster ID...
I20260430 07:55:48.352579  4872 catalog_manager.cc:1269] Loaded cluster ID: c893b49e53f64173a896b4b2dba678c2
I20260430 07:55:48.353288  4872 catalog_manager.cc:1505] Initializing Kudu internal certificate authority...
I20260430 07:55:48.358748  4872 catalog_manager.cc:1514] Loading token signing keys...
I20260430 07:55:48.361531  4872 catalog_manager.cc:6055] T 00000000000000000000000000000000 P 3a75644a42d64405ae1dd61a955fbcc9: Loaded TSK: 0
I20260430 07:55:48.363847  4872 catalog_manager.cc:1524] Initializing in-progress tserver states...
W20260430 07:55:48.673647  4865 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:48.674074  4865 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:48.674230  4865 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:48.684376  4865 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:48.684628  4865 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:48.684798  4865 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.1
I20260430 07:55:48.696861  4865 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.1:36583
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.105.1
--webserver_port=42573
--flush_threshold_mb=0
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.1
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:48.698984  4865 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:48.701931  4865 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:48.713351  4890 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:48.713594  4889 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:48.715305  4892 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:48.716522  4865 server_base.cc:1061] running on GCE node
I20260430 07:55:48.717427  4865 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:48.718871  4865 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:48.720170  4865 hybrid_clock.cc:648] HybridClock initialized: now 1777535748720087 us; error 64 us; skew 500 ppm
I20260430 07:55:48.720605  4865 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:48.724010  4865 webserver.cc:492] Webserver started at http://127.0.105.1:42573/ using document root <none> and password file <none>
I20260430 07:55:48.725137  4865 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:48.725345  4865 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:48.734223  4865 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.007s	sys 0.000s
I20260430 07:55:48.747416  4901 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:48.749495  4865 fs_manager.cc:730] Time spent opening block manager: real 0.013s	user 0.006s	sys 0.000s
I20260430 07:55:48.749737  4865 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
format_stamp: "Formatted at 2026-04-30 07:55:18 on dist-test-slave-1g5s"
I20260430 07:55:48.750765  4865 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/data
Total live blocks: 5
Total live bytes: 415473
Total live bytes (after alignment): 425984
Total number of LBM containers: 6 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:48.772071  4865 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:48.773634  4865 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:48.773979  4865 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:48.775300  4865 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:48.778280  4908 ts_tablet_manager.cc:548] Loading tablet metadata (0/1 complete)
I20260430 07:55:48.787047  4865 ts_tablet_manager.cc:585] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20260430 07:55:48.787227  4865 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.009s	user 0.001s	sys 0.000s
I20260430 07:55:48.787348  4865 ts_tablet_manager.cc:600] Registering tablets (0/1 complete)
I20260430 07:55:48.793213  4865 ts_tablet_manager.cc:616] Registered 1 tablets
I20260430 07:55:48.793378  4865 ts_tablet_manager.cc:595] Time spent register tablets: real 0.006s	user 0.006s	sys 0.000s
I20260430 07:55:48.794662  4908 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap starting.
I20260430 07:55:48.853086  4865 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.1:36583
I20260430 07:55:48.853104  5014 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.1:36583 every 8 connection(s)
I20260430 07:55:48.855017  4865 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb
I20260430 07:55:48.864209   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 4865
I20260430 07:55:48.865082   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.2:36961
--local_ip_for_outbound_sockets=127.0.105.2
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=38509
--webserver_interface=127.0.105.2
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0
--tablet_copy_early_session_timeout_prob=0.0 with env {}
I20260430 07:55:48.882947  5015 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:48.884327  5015 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:48.889125  5015 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:48.903023  4828 ts_manager.cc:194] Registered new tserver with Master: 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583)
I20260430 07:55:48.906884  4828 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.1:49543
I20260430 07:55:48.938048  4908 log.cc:826] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:49.141808  4908 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap replayed 1/1 log segments. Stats: ops{read=219 overwritten=0 applied=219 ignored=217} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:49.142863  4908 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Bootstrap complete.
I20260430 07:55:49.145066  4908 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent bootstrapping tablet: real 0.351s	user 0.305s	sys 0.040s
I20260430 07:55:49.150786  4908 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:49.151599  4908 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0696ac6914f940f2bcdc99c5d5c3d0e5, State: Initialized, Role: FOLLOWER
I20260430 07:55:49.152550  4908 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 219, Last appended: 3.219, Last appended by leader: 219, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:49.154423  5015 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:49.155263  4908 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5: Time spent starting tablet: real 0.009s	user 0.004s	sys 0.004s
I20260430 07:55:49.166659  5016 maintenance_manager.cc:419] P 0696ac6914f940f2bcdc99c5d5c3d0e5: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:49.168520  4906 maintenance_manager.cc:643] P 0696ac6914f940f2bcdc99c5d5c3d0e5: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":102,"lbm_reads_lt_1ms":4}
W20260430 07:55:49.337208  5019 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:49.337558  5019 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:49.337704  5019 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:49.347620  5019 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:49.347781  5019 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:49.347944  5019 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.2
I20260430 07:55:49.363974  5019 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.2:36961
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.105.2
--webserver_port=38509
--flush_threshold_mb=0
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.2
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:49.368076  5019 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:49.371831  5019 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:49.387892  5027 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:49.388597  5026 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:49.389113  5029 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:49.390717  5019 server_base.cc:1061] running on GCE node
I20260430 07:55:49.391680  5019 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:49.393262  5019 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:49.394632  5019 hybrid_clock.cc:648] HybridClock initialized: now 1777535749394589 us; error 69 us; skew 500 ppm
I20260430 07:55:49.395001  5019 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:49.397783  5019 webserver.cc:492] Webserver started at http://127.0.105.2:38509/ using document root <none> and password file <none>
I20260430 07:55:49.398636  5019 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:49.398836  5019 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:49.405306  5019 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.005s	sys 0.001s
I20260430 07:55:49.420747  5039 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:49.423202  5019 fs_manager.cc:730] Time spent opening block manager: real 0.016s	user 0.001s	sys 0.004s
I20260430 07:55:49.423445  5019 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
uuid: "9b6542a54f61418a894d790b5e1aa779"
format_stamp: "Formatted at 2026-04-30 07:55:19 on dist-test-slave-1g5s"
I20260430 07:55:49.424283  5019 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/data
Total live blocks: 10
Total live bytes: 804988
Total live bytes (after alignment): 827392
Total number of LBM containers: 6 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:49.443612  5019 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:49.444705  5019 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:49.445014  5019 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:49.446125  5019 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:49.448493  5046 ts_tablet_manager.cc:548] Loading tablet metadata (0/2 complete)
I20260430 07:55:49.458292  5019 ts_tablet_manager.cc:585] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20260430 07:55:49.458518  5019 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.010s	user 0.001s	sys 0.000s
I20260430 07:55:49.458674  5019 ts_tablet_manager.cc:600] Registering tablets (0/2 complete)
I20260430 07:55:49.462570  5046 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Bootstrap starting.
I20260430 07:55:49.463835  5019 ts_tablet_manager.cc:616] Registered 2 tablets
I20260430 07:55:49.463956  5019 ts_tablet_manager.cc:595] Time spent register tablets: real 0.005s	user 0.004s	sys 0.000s
I20260430 07:55:49.525014  5019 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.2:36961
I20260430 07:55:49.525053  5152 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.2:36961 every 8 connection(s)
I20260430 07:55:49.526974  5019 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb
I20260430 07:55:49.530122   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 5019
I20260430 07:55:49.530931   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.3:40671
--local_ip_for_outbound_sockets=127.0.105.3
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=34277
--webserver_interface=127.0.105.3
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0
--tablet_copy_early_session_timeout_prob=0.0 with env {}
I20260430 07:55:49.550910  5153 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:49.551397  5153 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:49.552498  5153 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:49.556094  4828 ts_manager.cc:194] Registered new tserver with Master: 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:49.557914  4828 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.2:38435
I20260430 07:55:49.599907  5046 log.cc:826] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:49.824774  5046 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=205} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:49.825549  5046 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Bootstrap complete.
I20260430 07:55:49.826964  5046 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Time spent bootstrapping tablet: real 0.365s	user 0.288s	sys 0.073s
I20260430 07:55:49.834200  5046 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:49.834995  5046 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Initialized, Role: FOLLOWER
I20260430 07:55:49.835930  5046 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:49.837520  5153 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:49.838122  5046 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779: Time spent starting tablet: real 0.011s	user 0.012s	sys 0.001s
I20260430 07:55:49.838614  5046 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Bootstrap starting.
I20260430 07:55:49.848632  5154 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:49.851759  5044 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":127,"lbm_reads_lt_1ms":4}
W20260430 07:55:50.027724  5157 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:50.028134  5157 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:50.028306  5157 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:50.041133  5157 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:50.041497  5157 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:50.041688  5157 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.3
I20260430 07:55:50.055759  5157 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.3:40671
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.105.3
--webserver_port=34277
--flush_threshold_mb=0
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.3
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:50.058805  5157 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:50.061823  5157 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:50.077023  5165 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:50.077111  5166 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:50.080775  5168 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:50.081713  5157 server_base.cc:1061] running on GCE node
I20260430 07:55:50.082680  5157 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:50.084107  5157 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:50.085449  5157 hybrid_clock.cc:648] HybridClock initialized: now 1777535750085344 us; error 85 us; skew 500 ppm
I20260430 07:55:50.085882  5157 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:50.090111  5157 webserver.cc:492] Webserver started at http://127.0.105.3:34277/ using document root <none> and password file <none>
I20260430 07:55:50.091262  5157 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:50.091632  5157 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:50.102928  5157 fs_manager.cc:714] Time spent opening directory manager: real 0.007s	user 0.007s	sys 0.000s
I20260430 07:55:50.129133  5177 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:50.132752  5157 fs_manager.cc:730] Time spent opening block manager: real 0.028s	user 0.005s	sys 0.000s
I20260430 07:55:50.133219  5157 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:50.136929  5157 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/data
Total live blocks: 10
Total live bytes: 804988
Total live bytes (after alignment): 827392
Total number of LBM containers: 6 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:50.174995  5157 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:50.176000  5157 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:50.176344  5157 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:50.178817  5046 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Bootstrap replayed 1/1 log segments. Stats: ops{read=220 overwritten=0 applied=220 ignored=217} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:50.178817  5157 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:50.179363  5046 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Bootstrap complete.
I20260430 07:55:50.180495  5046 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Time spent bootstrapping tablet: real 0.342s	user 0.286s	sys 0.052s
I20260430 07:55:50.181813  5046 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:50.182139  5046 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Initialized, Role: FOLLOWER
I20260430 07:55:50.182546  5046 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 220, Last appended: 3.220, Last appended by leader: 220, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:50.182808  5184 ts_tablet_manager.cc:548] Loading tablet metadata (0/2 complete)
I20260430 07:55:50.183072  5046 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Time spent starting tablet: real 0.002s	user 0.004s	sys 0.000s
I20260430 07:55:50.193723  5154 maintenance_manager.cc:419] P 9b6542a54f61418a894d790b5e1aa779: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:50.194787  5044 maintenance_manager.cc:643] P 9b6542a54f61418a894d790b5e1aa779: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":241,"lbm_reads_lt_1ms":4}
I20260430 07:55:50.198040  5157 ts_tablet_manager.cc:585] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20260430 07:55:50.198544  5157 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.017s	user 0.001s	sys 0.000s
I20260430 07:55:50.198784  5157 ts_tablet_manager.cc:600] Registering tablets (0/2 complete)
I20260430 07:55:50.203533  5184 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap starting.
I20260430 07:55:50.206156  5157 ts_tablet_manager.cc:616] Registered 2 tablets
I20260430 07:55:50.206296  5157 ts_tablet_manager.cc:595] Time spent register tablets: real 0.008s	user 0.004s	sys 0.003s
I20260430 07:55:50.275310  5157 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.3:40671
I20260430 07:55:50.275349  5290 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.3:40671 every 8 connection(s)
I20260430 07:55:50.277303  5157 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb
I20260430 07:55:50.286559   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 5157
I20260430 07:55:50.287320   420 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
/tmp/dist-test-taskupxCjQ/build/asan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.105.4:39005
--local_ip_for_outbound_sockets=127.0.105.4
--tserver_master_addrs=127.0.105.62:33525
--webserver_port=38779
--webserver_interface=127.0.105.4
--builtin_ntp_servers=127.0.105.20:38547
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--flush_threshold_mb=0
--maintenance_manager_polling_interval_ms=10
--follower_unavailable_considered_failed_sec=10
--tablet_copy_early_session_timeout_prob=1.0
--tablet_copy_early_session_timeout_prob=0.0 with env {}
I20260430 07:55:50.299587  5291 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:50.300201  5291 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:50.301641  5291 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:50.305688  4828 ts_manager.cc:194] Registered new tserver with Master: ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:50.307273  4828 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.3:53413
I20260430 07:55:50.334977  5184 log.cc:826] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:50.491710  5297 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:50.492069  5297 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:50.494181  5297 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:50.523010  5184 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=205} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:50.523939  5184 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap complete.
I20260430 07:55:50.525772  5184 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent bootstrapping tablet: real 0.323s	user 0.260s	sys 0.060s
I20260430 07:55:50.528702  5108 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 4 candidate_status { last_received { term: 3 index: 219 } } ignore_live_leader: false dest_uuid: "9b6542a54f61418a894d790b5e1aa779" is_pre_election: true
I20260430 07:55:50.529322  5108 raft_consensus.cc:2410] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 0696ac6914f940f2bcdc99c5d5c3d0e5 for term 4 because replica has last-logged OpId of term: 3 index: 220, which is greater than that of the candidate, which has last-logged OpId of term: 3 index: 219.
I20260430 07:55:50.531867  5184 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:50.533011  5184 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea6bd19fddfe4b988f4682a0bdec2adc, State: Initialized, Role: FOLLOWER
I20260430 07:55:50.533942  5184 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:50.540648  5246 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" candidate_term: 4 candidate_status { last_received { term: 3 index: 219 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:50.540963  5291 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:50.541819  5184 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent starting tablet: real 0.016s	user 0.014s	sys 0.001s
I20260430 07:55:50.542447  5184 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap starting.
W20260430 07:55:50.544682  4904 leader_election.cc:343] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 4 pre-election: Tablet error from VoteRequest() call to peer ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): Illegal state: must be running to vote when last-logged opid is not known
I20260430 07:55:50.544895  4904 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 0696ac6914f940f2bcdc99c5d5c3d0e5; no voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc
I20260430 07:55:50.545523  5292 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:50.545835  5297 raft_consensus.cc:2749] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Leader pre-election lost for term 4. Reason: could not achieve majority
I20260430 07:55:50.547176  5182 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":112,"lbm_reads_lt_1ms":4}
W20260430 07:55:50.746014  5295 flags.cc:432] Enabled unsafe flag: --openssl_security_level_override=0
W20260430 07:55:50.746423  5295 flags.cc:432] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20260430 07:55:50.746574  5295 flags.cc:432] Enabled unsafe flag: --never_fsync=true
W20260430 07:55:50.757197  5295 flags.cc:432] Enabled experimental flag: --ipki_server_key_size=768
W20260430 07:55:50.757509  5295 flags.cc:432] Enabled experimental flag: --flush_threshold_mb=0
W20260430 07:55:50.757683  5295 flags.cc:432] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.105.4
I20260430 07:55:50.770023  5295 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.105.20:38547
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--follower_unavailable_considered_failed_sec=10
--fs_data_dirs=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.105.4:39005
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
--webserver_interface=127.0.105.4
--webserver_port=38779
--flush_threshold_mb=0
--tserver_master_addrs=127.0.105.62:33525
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--maintenance_manager_polling_interval_ms=10
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.105.4
--log_dir=/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8537ed5bb5e0e6ac1e481ff961e6b9d08cce7671
build type FASTDEBUG
built by None at 30 Apr 2026 07:43:22 UTC on bdcb31816ec0
build id 11674
ASAN enabled
I20260430 07:55:50.771906  5295 env_posix.cc:2267] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20260430 07:55:50.774116  5295 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20260430 07:55:50.787007  5308 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:50.787007  5309 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20260430 07:55:50.789815  5311 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20260430 07:55:50.791631  5295 server_base.cc:1061] running on GCE node
I20260430 07:55:50.792476  5295 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20260430 07:55:50.793830  5295 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20260430 07:55:50.797654  5295 hybrid_clock.cc:648] HybridClock initialized: now 1777535750797461 us; error 146 us; skew 500 ppm
I20260430 07:55:50.798430  5295 server_base.cc:861] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20260430 07:55:50.801813  5295 webserver.cc:492] Webserver started at http://127.0.105.4:38779/ using document root <none> and password file <none>
I20260430 07:55:50.803090  5295 fs_manager.cc:362] Metadata directory not provided
I20260430 07:55:50.803316  5295 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20260430 07:55:50.810689  5295 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.002s	sys 0.001s
I20260430 07:55:50.824183  5320 log_block_manager.cc:3869] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20260430 07:55:50.826895  5295 fs_manager.cc:730] Time spent opening block manager: real 0.015s	user 0.004s	sys 0.001s
I20260430 07:55:50.827160  5295 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data,/tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
uuid: "2e401b3aecfd46378718b182a4bec89f"
format_stamp: "Formatted at 2026-04-30 07:55:20 on dist-test-slave-1g5s"
I20260430 07:55:50.828071  5295 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
metadata directory: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal
1 data directories: /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/data
Total live blocks: 5
Total live bytes: 389515
Total live bytes (after alignment): 401408
Total number of LBM containers: 6 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20260430 07:55:50.842780  5184 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap replayed 1/1 log segments. Stats: ops{read=220 overwritten=0 applied=220 ignored=217} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:50.843312  5184 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Bootstrap complete.
I20260430 07:55:50.844669  5184 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent bootstrapping tablet: real 0.302s	user 0.249s	sys 0.046s
I20260430 07:55:50.845767  5184 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:50.845976  5184 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea6bd19fddfe4b988f4682a0bdec2adc, State: Initialized, Role: FOLLOWER
I20260430 07:55:50.846318  5184 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 220, Last appended: 3.220, Last appended by leader: 220, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:50.846875  5184 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc: Time spent starting tablet: real 0.002s	user 0.004s	sys 0.000s
I20260430 07:55:50.850744  5292 maintenance_manager.cc:419] P ea6bd19fddfe4b988f4682a0bdec2adc: Scheduling UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86): 101929 bytes on disk
I20260430 07:55:50.851480  5182 maintenance_manager.cc:643] P ea6bd19fddfe4b988f4682a0bdec2adc: UndoDeltaBlockGCOp(0d01e5768e6f435695871abd9deaee86) complete. Timing: real 0.000s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":98,"lbm_reads_lt_1ms":4}
I20260430 07:55:50.863914  5295 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20260430 07:55:50.865028  5295 env_posix.cc:2267] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20260430 07:55:50.865410  5295 kserver.cc:163] Server-wide thread pool size limit: 3276
I20260430 07:55:50.866580  5295 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20260430 07:55:50.869222  5327 ts_tablet_manager.cc:548] Loading tablet metadata (0/2 complete)
I20260430 07:55:50.883612  5295 ts_tablet_manager.cc:585] Loaded tablet metadata (2 total tablets, 1 live tablets)
I20260430 07:55:50.883908  5295 ts_tablet_manager.cc:531] Time spent load tablet metadata: real 0.015s	user 0.001s	sys 0.000s
I20260430 07:55:50.884094  5295 ts_tablet_manager.cc:600] Registering tablets (0/2 complete)
I20260430 07:55:50.886806  5295 ts_tablet_manager.cc:616] Registered 1 tablets
I20260430 07:55:50.886955  5295 ts_tablet_manager.cc:595] Time spent register tablets: real 0.003s	user 0.002s	sys 0.000s
I20260430 07:55:50.887616  5327 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Bootstrap starting.
I20260430 07:55:50.965567  5295 rpc_server.cc:307] RPC server started. Bound to: 127.0.105.4:39005
I20260430 07:55:50.965584  5433 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.105.4:39005 every 8 connection(s)
I20260430 07:55:50.967751  5295 server_base.cc:1193] Dumped server information to /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb
I20260430 07:55:50.974478   420 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu as pid 5295
I20260430 07:55:50.995155  5434 heartbeater.cc:344] Connected to a master server at 127.0.105.62:33525
I20260430 07:55:50.995689  5434 heartbeater.cc:461] Registering TS with master...
I20260430 07:55:50.997001  5434 heartbeater.cc:507] Master 127.0.105.62:33525 requested a full tablet report, sending...
I20260430 07:55:51.000468  4828 ts_manager.cc:194] Registered new tserver with Master: 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005)
I20260430 07:55:51.003755  4828 master_service.cc:502] Signed X509 certificate for tserver {username='slave'} at 127.0.105.4:50511
I20260430 07:55:51.015954   420 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20260430 07:55:51.016181   420 tablet_copy-itest.cc:1615] Running ksck...
I20260430 07:55:51.038327  5327 log.cc:826] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Log is configured to *not* fsync() on all Append() calls
I20260430 07:55:51.142184  5459 raft_consensus.cc:493] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:51.142527  5459 raft_consensus.cc:515] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:51.145545  5459 leader_election.cc:290] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:51.167593  5246 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:51.168116  5246 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 1.
I20260430 07:55:51.169445  5042 leader_election.cc:304] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 
I20260430 07:55:51.170063  5459 raft_consensus.cc:2804] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20260430 07:55:51.171273  5459 raft_consensus.cc:493] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:55:51.171489  5459 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:51.180693  5459 raft_consensus.cc:515] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:51.185058  5246 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:51.185406  5246 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:51.187700  5388 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "2e401b3aecfd46378718b182a4bec89f"
I20260430 07:55:51.191745  5246 raft_consensus.cc:2468] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 2.
W20260430 07:55:51.193210  5040 leader_election.cc:343] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): Illegal state: must be running to vote when last-logged opid is not known
I20260430 07:55:51.189504  5389 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "930fc8ad15e14df2af31bad7407f95ad" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "2e401b3aecfd46378718b182a4bec89f" is_pre_election: true
I20260430 07:55:51.193603  5042 leader_election.cc:304] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 2e401b3aecfd46378718b182a4bec89f
W20260430 07:55:51.194458  5040 leader_election.cc:343] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): Illegal state: must be running to vote when last-logged opid is not known
I20260430 07:55:51.199489  5459 leader_election.cc:290] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 2 election: Requested vote from peers 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:51.200605  5459 raft_consensus.cc:2804] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 2 FOLLOWER]: Leader election won for term 2
I20260430 07:55:51.202785  5459 raft_consensus.cc:697] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [term 2 LEADER]: Becoming Leader. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Running, Role: LEADER
I20260430 07:55:51.204952  5459 consensus_queue.cc:237] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:51.211652  4950 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:51.223989  5369 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:51.248242  5226 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:51.258689  4827 catalog_manager.cc:5671] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 reported cstate change: term changed from 1 to 2, leader changed from 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) to 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2). New cstate: current_term: 2 leader_uuid: "9b6542a54f61418a894d790b5e1aa779" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:51.263289  5088 tablet_service.cc:1467] Tablet server has 1 leaders and 0 scanners
Master Summary
               UUID               |      Address       | Status
----------------------------------+--------------------+---------
 3a75644a42d64405ae1dd61a955fbcc9 | 127.0.105.62:33525 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                              Value                                                                               |      Tags       |         Master
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                             | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                             | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                             | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                               | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                              | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |       Value        |         Master
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 1 server(s) checked
 time_source         | builtin            | all 1 server(s) checked

Tablet Server Summary
               UUID               |      Address      | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | HEALTHY | <none>   |       0        |       0
 2e401b3aecfd46378718b182a4bec89f | 127.0.105.4:39005 | HEALTHY | <none>   |       0        |       0
 9b6542a54f61418a894d790b5e1aa779 | 127.0.105.2:36961 | HEALTHY | <none>   |       1        |       0
 ea6bd19fddfe4b988f4682a0bdec2adc | 127.0.105.3:40671 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       4

Unusual flags for Tablet Server:
                  Flag                   |                                                                            Value                                                                             |         Tags         |      Tablet Server
-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------------
 flush_threshold_mb                      | 0                                                                                                                                                            | runtime,experimental | all 4 server(s) checked
 ipki_server_key_size                    | 768                                                                                                                                                          | experimental         | all 4 server(s) checked
 local_ip_for_outbound_sockets           | 127.0.105.1                                                                                                                                                  | experimental         | 127.0.105.1:36583
 local_ip_for_outbound_sockets           | 127.0.105.2                                                                                                                                                  | experimental         | 127.0.105.2:36961
 local_ip_for_outbound_sockets           | 127.0.105.3                                                                                                                                                  | experimental         | 127.0.105.3:40671
 local_ip_for_outbound_sockets           | 127.0.105.4                                                                                                                                                  | experimental         | 127.0.105.4:39005
 maintenance_manager_polling_interval_ms | 10                                                                                                                                                           | hidden               | all 4 server(s) checked
 never_fsync                             | true                                                                                                                                                         | unsafe,advanced      | all 4 server(s) checked
 openssl_security_level_override         | 0                                                                                                                                                            | unsafe,hidden        | all 4 server(s) checked
 rpc_server_allow_ephemeral_ports        | true                                                                                                                                                         | unsafe               | all 4 server(s) checked
 server_dump_info_format                 | pb                                                                                                                                                           | hidden               | all 4 server(s) checked
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb | hidden               | 127.0.105.1:36583
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb | hidden               | 127.0.105.2:36961
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb | hidden               | 127.0.105.3:40671
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb | hidden               | 127.0.105.4:39005
 tablet_copy_early_session_timeout_prob  | 0                                                                                                                                                            | unsafe,runtime       | all 4 server(s) checked

Flags of checked categories for Tablet Server:
        Flag         |       Value        |      Tablet Server
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 4 server(s) checked
 time_source         | builtin            | all 4 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 5 server(s) checked

Tablet Summary
Tablet 930fc8ad15e14df2af31bad7407f95ad of table 'table_b' is under-replicated: 1 replica(s) not RUNNING
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): not running [LEADER]
    State:       BOOTSTRAPPING
    Data state:  TABLET_DATA_READY
    Last status: Bootstrap starting.
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
All reported replicas are:
  A = 2e401b3aecfd46378718b182a4bec89f
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
The consensus matrix is:
 Config source |   Replicas   | Current term | Config index | Committed?
---------------+--------------+--------------+--------------+------------
 master        | A*  B   C    |              |              | Yes
 A             | A   B   C    | 1            | -1           | Yes
 B             | A   B*  C    | 2            | -1           | Yes
 C             | A   B   C    | 2            | -1           | Yes

Tablet 0d01e5768e6f435695871abd9deaee86 of table 'table_a' is conflicted: 4 replicas' active configs disagree with the leader master's
  0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): RUNNING
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING [LEADER]
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): not running [NONVOTER]
    State:       INITIALIZED
    Data state:  TABLET_DATA_TOMBSTONED
    Last status: Tombstoned
All reported replicas are:
  A = 0696ac6914f940f2bcdc99c5d5c3d0e5
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
  D = 2e401b3aecfd46378718b182a4bec89f
The consensus matrix is:
 Config source |     Replicas     | Current term | Config index | Committed?
---------------+------------------+--------------+--------------+------------
 master        | A   B*  C   D~   |              |              | Yes
 A             | A   B   C        | 3            | -1           | Yes
 B             | A   B   C   D~   | 3            | 220          | Yes
 C             | A   B   C   D~   | 3            | 220          | Yes
 D             | A   B   C   D~   | 3            | 220          | Yes

The cluster doesn't have any matching system tables
Summary by table
  Name   | RF |       Status       | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------+----+--------------------+---------------+---------+------------+------------------+-------------
 table_a | 3  | CONSENSUS_MISMATCH | 1             | 0       | 0          | 0                | 1
 table_b | 3  | UNDER_REPLICATED   | 1             | 0       | 0          | 1                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 2
 Median         | 2
 Third Quartile | 2
 Maximum        | 2

Tablet Replica Count Outliers
 Type  |               UUID               |       Host        | Replica Count
-------+----------------------------------+-------------------+---------------
 Small | 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 4
 Tables         | 2
 Tablets        | 2
 Replicas       | 7

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

==================
Errors:
==================
Corruption: table consistency check error: 2 out of 2 table(s) are not healthy

FAILED
I20260430 07:55:51.387612   420 cluster_verifier.cc:83] Check not successful yet, sleeping and retrying: Runtime error: ksck discovered errors
I20260430 07:55:51.407224  5327 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=205} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:51.408185  5327 tablet_bootstrap.cc:492] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Bootstrap complete.
I20260430 07:55:51.409907  5327 ts_tablet_manager.cc:1403] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Time spent bootstrapping tablet: real 0.523s	user 0.384s	sys 0.057s
I20260430 07:55:51.412700  5327 raft_consensus.cc:359] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:51.413609  5327 raft_consensus.cc:740] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2e401b3aecfd46378718b182a4bec89f, State: Initialized, Role: FOLLOWER
I20260430 07:55:51.414294  5327 consensus_queue.cc:260] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:51.415727  5434 heartbeater.cc:499] Master 127.0.105.62:33525 was elected leader, sending a full tablet report...
I20260430 07:55:51.415964  5327 ts_tablet_manager.cc:1434] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f: Time spent starting tablet: real 0.006s	user 0.005s	sys 0.000s
I20260430 07:55:51.422591  5435 maintenance_manager.cc:419] P 2e401b3aecfd46378718b182a4bec89f: Scheduling UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad): 96119 bytes on disk
I20260430 07:55:51.423888  5325 maintenance_manager.cc:643] P 2e401b3aecfd46378718b182a4bec89f: UndoDeltaBlockGCOp(930fc8ad15e14df2af31bad7407f95ad) complete. Timing: real 0.001s	user 0.000s	sys 0.000s Metrics: {"cfile_init":1,"lbm_read_time_us":83,"lbm_reads_lt_1ms":4}
I20260430 07:55:51.645920  5369 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:51.646983  4950 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:51.649945  5088 tablet_service.cc:1467] Tablet server has 1 leaders and 0 scanners
I20260430 07:55:51.652926  5226 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
Tablet Summary
Tablet 930fc8ad15e14df2af31bad7407f95ad of table 'table_b' is conflicted: 2 replicas' active configs disagree with the leader master's
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): RUNNING
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING [LEADER]
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
All reported replicas are:
  A = 2e401b3aecfd46378718b182a4bec89f
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
The consensus matrix is:
 Config source |   Replicas   | Current term | Config index | Committed?
---------------+--------------+--------------+--------------+------------
 master        | A   B*  C    |              |              | Yes
 A             | A   B   C    | 1            | -1           | Yes
 B             | A   B*  C    | 2            | -1           | Yes
 C             | A   B   C    | 2            | -1           | Yes

Tablet 0d01e5768e6f435695871abd9deaee86 of table 'table_a' is conflicted: 4 replicas' active configs disagree with the leader master's
  0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): RUNNING
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING [LEADER]
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): not running [NONVOTER]
    State:       INITIALIZED
    Data state:  TABLET_DATA_TOMBSTONED
    Last status: Tombstoned
All reported replicas are:
  A = 0696ac6914f940f2bcdc99c5d5c3d0e5
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
  D = 2e401b3aecfd46378718b182a4bec89f
The consensus matrix is:
 Config source |     Replicas     | Current term | Config index | Committed?
---------------+------------------+--------------+--------------+------------
 master        | A   B*  C   D~   |              |              | Yes
 A             | A   B   C        | 3            | -1           | Yes
 B             | A   B   C   D~   | 3            | 220          | Yes
 C             | A   B   C   D~   | 3            | 220          | Yes
 D             | A   B   C   D~   | 3            | 220          | Yes

The cluster doesn't have any matching system tables
Summary by table
  Name   | RF |       Status       | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------+----+--------------------+---------------+---------+------------+------------------+-------------
 table_a | 3  | CONSENSUS_MISMATCH | 1             | 0       | 0          | 0                | 1
 table_b | 3  | CONSENSUS_MISMATCH | 1             | 0       | 0          | 0                | 1

==================
Errors:
==================
Corruption: table consistency check error: 2 out of 2 table(s) are not healthy

FAILED
I20260430 07:55:51.732538   420 cluster_verifier.cc:83] Check not successful yet, sleeping and retrying: Runtime error: ksck discovered errors
I20260430 07:55:51.774150  5388 raft_consensus.cc:3060] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 1 FOLLOWER]: Advancing to term 2
I20260430 07:55:51.778895  5388 raft_consensus.cc:1275] T 930fc8ad15e14df2af31bad7407f95ad P 2e401b3aecfd46378718b182a4bec89f [term 2 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 1 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20260430 07:55:51.780174  5506 consensus_queue.cc:1048] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: VOTER last_known_addr { host: "127.0.105.4" port: 39005 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20260430 07:55:51.786988  5246 raft_consensus.cc:1275] T 930fc8ad15e14df2af31bad7407f95ad P ea6bd19fddfe4b988f4682a0bdec2adc [term 2 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 1 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20260430 07:55:51.787835  5506 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20260430 07:55:51.788059  5506 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:51.789062  5506 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:51.789456  5506 consensus_queue.cc:1048] T 930fc8ad15e14df2af31bad7407f95ad P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20260430 07:55:51.793586  5246 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 4 candidate_status { last_received { term: 3 index: 220 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" is_pre_election: true
I20260430 07:55:51.793902  5246 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 3.
I20260430 07:55:51.794679  5042 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 
I20260430 07:55:51.795197  5506 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20260430 07:55:51.795359  5506 raft_consensus.cc:493] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20260430 07:55:51.795469  5506 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:55:51.799067  5506 raft_consensus.cc:515] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 4 FOLLOWER]: Starting leader election with config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:51.800007  5506 leader_election.cc:290] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 4 election: Requested vote from peers 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583), ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671)
I20260430 07:55:51.801077  5245 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 4 candidate_status { last_received { term: 3 index: 220 } } ignore_live_leader: false dest_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc"
I20260430 07:55:51.801921  5245 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:55:51.802505  4970 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 4 candidate_status { last_received { term: 3 index: 220 } } ignore_live_leader: false dest_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" is_pre_election: true
I20260430 07:55:51.802923  4970 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 3.
I20260430 07:55:51.803052  4969 tablet_service.cc:1917] Received RequestConsensusVote() RPC: tablet_id: "0d01e5768e6f435695871abd9deaee86" candidate_uuid: "9b6542a54f61418a894d790b5e1aa779" candidate_term: 4 candidate_status { last_received { term: 3 index: 220 } } ignore_live_leader: false dest_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5"
I20260430 07:55:51.803331  4969 raft_consensus.cc:3060] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 3 FOLLOWER]: Advancing to term 4
I20260430 07:55:51.806444  5245 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 4.
I20260430 07:55:51.807013  5042 leader_election.cc:304] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9b6542a54f61418a894d790b5e1aa779, ea6bd19fddfe4b988f4682a0bdec2adc; no voters: 
I20260430 07:55:51.807397  5506 raft_consensus.cc:2804] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 4 FOLLOWER]: Leader election won for term 4
I20260430 07:55:51.807574  5506 raft_consensus.cc:697] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 4 LEADER]: Becoming Leader. State: Replica: 9b6542a54f61418a894d790b5e1aa779, State: Running, Role: LEADER
I20260430 07:55:51.807889  5506 consensus_queue.cc:237] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 220, Committed index: 220, Last appended: 3.220, Last appended by leader: 220, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:51.808671  4969 raft_consensus.cc:2468] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9b6542a54f61418a894d790b5e1aa779 in term 4.
I20260430 07:55:51.811332  4827 catalog_manager.cc:5671] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 reported cstate change: term changed from 3 to 4. New cstate: current_term: 4 leader_uuid: "9b6542a54f61418a894d790b5e1aa779" committed_config { opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20260430 07:55:52.086410  5369 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:52.090129  4950 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:52.093446  5088 tablet_service.cc:1467] Tablet server has 2 leaders and 0 scanners
I20260430 07:55:52.105592  5226 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |      Address       | Status
----------------------------------+--------------------+---------
 3a75644a42d64405ae1dd61a955fbcc9 | 127.0.105.62:33525 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                              Value                                                                               |      Tags       |         Master
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                             | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                             | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                             | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                               | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                              | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |       Value        |         Master
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 1 server(s) checked
 time_source         | builtin            | all 1 server(s) checked

Tablet Server Summary
               UUID               |      Address      | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | HEALTHY | <none>   |       0        |       0
 2e401b3aecfd46378718b182a4bec89f | 127.0.105.4:39005 | HEALTHY | <none>   |       0        |       0
 9b6542a54f61418a894d790b5e1aa779 | 127.0.105.2:36961 | HEALTHY | <none>   |       2        |       0
 ea6bd19fddfe4b988f4682a0bdec2adc | 127.0.105.3:40671 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       4

Unusual flags for Tablet Server:
                  Flag                   |                                                                            Value                                                                             |         Tags         |      Tablet Server
-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------------
 flush_threshold_mb                      | 0                                                                                                                                                            | runtime,experimental | all 4 server(s) checked
 ipki_server_key_size                    | 768                                                                                                                                                          | experimental         | all 4 server(s) checked
 local_ip_for_outbound_sockets           | 127.0.105.1                                                                                                                                                  | experimental         | 127.0.105.1:36583
 local_ip_for_outbound_sockets           | 127.0.105.2                                                                                                                                                  | experimental         | 127.0.105.2:36961
 local_ip_for_outbound_sockets           | 127.0.105.3                                                                                                                                                  | experimental         | 127.0.105.3:40671
 local_ip_for_outbound_sockets           | 127.0.105.4                                                                                                                                                  | experimental         | 127.0.105.4:39005
 maintenance_manager_polling_interval_ms | 10                                                                                                                                                           | hidden               | all 4 server(s) checked
 never_fsync                             | true                                                                                                                                                         | unsafe,advanced      | all 4 server(s) checked
 openssl_security_level_override         | 0                                                                                                                                                            | unsafe,hidden        | all 4 server(s) checked
 rpc_server_allow_ephemeral_ports        | true                                                                                                                                                         | unsafe               | all 4 server(s) checked
 server_dump_info_format                 | pb                                                                                                                                                           | hidden               | all 4 server(s) checked
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb | hidden               | 127.0.105.1:36583
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb | hidden               | 127.0.105.2:36961
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb | hidden               | 127.0.105.3:40671
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb | hidden               | 127.0.105.4:39005
 tablet_copy_early_session_timeout_prob  | 0                                                                                                                                                            | unsafe,runtime       | all 4 server(s) checked

Flags of checked categories for Tablet Server:
        Flag         |       Value        |      Tablet Server
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 4 server(s) checked
 time_source         | builtin            | all 4 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 5 server(s) checked

Tablet Summary
Tablet 0d01e5768e6f435695871abd9deaee86 of table 'table_a' is conflicted: 3 replicas' active configs disagree with the leader master's
  0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): RUNNING
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING [LEADER]
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): not running [NONVOTER]
    State:       INITIALIZED
    Data state:  TABLET_DATA_TOMBSTONED
    Last status: Tombstoned
All reported replicas are:
  A = 0696ac6914f940f2bcdc99c5d5c3d0e5
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
  D = 2e401b3aecfd46378718b182a4bec89f
The consensus matrix is:
 Config source |     Replicas     | Current term | Config index | Committed?
---------------+------------------+--------------+--------------+------------
 master        | A   B*  C   D~   |              |              | Yes
 A             | A   B   C        | 4            | -1           | Yes
 B             | A   B*  C   D~   | 4            | 220          | Yes
 C             | A   B   C   D~   | 4            | 220          | Yes
 D             | A   B   C   D~   | 3            | 220          | Yes

The cluster doesn't have any matching system tables
Summary by table
  Name   | RF |       Status       | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------+----+--------------------+---------------+---------+------------+------------------+-------------
 table_b | 3  | HEALTHY            | 1             | 1       | 0          | 0                | 0
 table_a | 3  | CONSENSUS_MISMATCH | 1             | 0       | 0          | 0                | 1

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 2
 Median         | 2
 Third Quartile | 2
 Maximum        | 2

Tablet Replica Count Outliers
 Type  |               UUID               |       Host        | Replica Count
-------+----------------------------------+-------------------+---------------
 Small | 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 4
 Tables         | 2
 Tablets        | 2
 Replicas       | 7

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

==================
Errors:
==================
Corruption: table consistency check error: 1 out of 2 table(s) are not healthy

FAILED
I20260430 07:55:52.166693   420 cluster_verifier.cc:83] Check not successful yet, sleeping and retrying: Runtime error: ksck discovered errors
I20260430 07:55:52.314810  4969 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 4 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 3 index: 219. Preceding OpId from leader: term: 4 index: 221. (index mismatch)
I20260430 07:55:52.316097  5506 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 221, Last known committed idx: 219, Time since last communication: 0.000s
I20260430 07:55:52.320797  4969 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 4 FOLLOWER]: Committing config change with OpId 3.220: config changed from index -1 to 220, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) added. New config: { opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } } }
I20260430 07:55:52.328809  5245 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 4 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 3 index: 220. Preceding OpId from leader: term: 4 index: 221. (index mismatch)
W20260430 07:55:52.329799  5040 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): Couldn't send request to peer 2e401b3aecfd46378718b182a4bec89f. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: INITIALIZED. This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:52.330123  5548 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 221, Last known committed idx: 220, Time since last communication: 0.000s
I20260430 07:55:52.624782  5369 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:52.629160  4950 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:52.634585  5088 tablet_service.cc:1467] Tablet server has 2 leaders and 0 scanners
I20260430 07:55:52.637531  5226 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |      Address       | Status
----------------------------------+--------------------+---------
 3a75644a42d64405ae1dd61a955fbcc9 | 127.0.105.62:33525 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                              Value                                                                               |      Tags       |         Master
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                             | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                             | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                             | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                               | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                              | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |       Value        |         Master
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 1 server(s) checked
 time_source         | builtin            | all 1 server(s) checked

Tablet Server Summary
               UUID               |      Address      | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | HEALTHY | <none>   |       0        |       0
 2e401b3aecfd46378718b182a4bec89f | 127.0.105.4:39005 | HEALTHY | <none>   |       0        |       0
 9b6542a54f61418a894d790b5e1aa779 | 127.0.105.2:36961 | HEALTHY | <none>   |       2        |       0
 ea6bd19fddfe4b988f4682a0bdec2adc | 127.0.105.3:40671 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       4

Unusual flags for Tablet Server:
                  Flag                   |                                                                            Value                                                                             |         Tags         |      Tablet Server
-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------------
 flush_threshold_mb                      | 0                                                                                                                                                            | runtime,experimental | all 4 server(s) checked
 ipki_server_key_size                    | 768                                                                                                                                                          | experimental         | all 4 server(s) checked
 local_ip_for_outbound_sockets           | 127.0.105.1                                                                                                                                                  | experimental         | 127.0.105.1:36583
 local_ip_for_outbound_sockets           | 127.0.105.2                                                                                                                                                  | experimental         | 127.0.105.2:36961
 local_ip_for_outbound_sockets           | 127.0.105.3                                                                                                                                                  | experimental         | 127.0.105.3:40671
 local_ip_for_outbound_sockets           | 127.0.105.4                                                                                                                                                  | experimental         | 127.0.105.4:39005
 maintenance_manager_polling_interval_ms | 10                                                                                                                                                           | hidden               | all 4 server(s) checked
 never_fsync                             | true                                                                                                                                                         | unsafe,advanced      | all 4 server(s) checked
 openssl_security_level_override         | 0                                                                                                                                                            | unsafe,hidden        | all 4 server(s) checked
 rpc_server_allow_ephemeral_ports        | true                                                                                                                                                         | unsafe               | all 4 server(s) checked
 server_dump_info_format                 | pb                                                                                                                                                           | hidden               | all 4 server(s) checked
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb | hidden               | 127.0.105.1:36583
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb | hidden               | 127.0.105.2:36961
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb | hidden               | 127.0.105.3:40671
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb | hidden               | 127.0.105.4:39005
 tablet_copy_early_session_timeout_prob  | 0                                                                                                                                                            | unsafe,runtime       | all 4 server(s) checked

Flags of checked categories for Tablet Server:
        Flag         |       Value        |      Tablet Server
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 4 server(s) checked
 time_source         | builtin            | all 4 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 5 server(s) checked

Tablet Summary
Tablet 0d01e5768e6f435695871abd9deaee86 of table 'table_a' is conflicted: 1 replicas' active configs disagree with the leader master's
  0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): RUNNING
  9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961): RUNNING [LEADER]
  ea6bd19fddfe4b988f4682a0bdec2adc (127.0.105.3:40671): RUNNING
  2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): not running [NONVOTER]
    State:       INITIALIZED
    Data state:  TABLET_DATA_TOMBSTONED
    Last status: Tombstoned
All reported replicas are:
  A = 0696ac6914f940f2bcdc99c5d5c3d0e5
  B = 9b6542a54f61418a894d790b5e1aa779
  C = ea6bd19fddfe4b988f4682a0bdec2adc
  D = 2e401b3aecfd46378718b182a4bec89f
The consensus matrix is:
 Config source |     Replicas     | Current term | Config index | Committed?
---------------+------------------+--------------+--------------+------------
 master        | A   B*  C   D~   |              |              | Yes
 A             | A   B*  C   D~   | 4            | 220          | Yes
 B             | A   B*  C   D~   | 4            | 220          | Yes
 C             | A   B*  C   D~   | 4            | 220          | Yes
 D             | A   B   C   D~   | 3            | 220          | Yes

The cluster doesn't have any matching system tables
Summary by table
  Name   | RF |       Status       | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------+----+--------------------+---------------+---------+------------+------------------+-------------
 table_b | 3  | HEALTHY            | 1             | 1       | 0          | 0                | 0
 table_a | 3  | CONSENSUS_MISMATCH | 1             | 0       | 0          | 0                | 1

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 2
 Median         | 2
 Third Quartile | 2
 Maximum        | 2

Tablet Replica Count Outliers
 Type  |               UUID               |       Host        | Replica Count
-------+----------------------------------+-------------------+---------------
 Small | 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 4
 Tables         | 2
 Tablets        | 2
 Replicas       | 7

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

==================
Errors:
==================
Corruption: table consistency check error: 1 out of 2 table(s) are not healthy

FAILED
I20260430 07:55:52.700335   420 cluster_verifier.cc:83] Check not successful yet, sleeping and retrying: Runtime error: ksck discovered errors
I20260430 07:55:52.730684  5583 tablet_replica.cc:333] stopping tablet replica
I20260430 07:55:52.730875  5583 raft_consensus.cc:2243] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 3 LEARNER]: Raft consensus shutting down.
I20260430 07:55:52.731081  5583 raft_consensus.cc:2272] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 3 LEARNER]: Raft consensus is shut down!
I20260430 07:55:52.731351  5583 ts_tablet_manager.cc:933] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Initiating tablet copy from peer 9b6542a54f61418a894d790b5e1aa779 (127.0.105.2:36961)
I20260430 07:55:52.732373  5583 tablet_copy_client.cc:323] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Beginning tablet copy session from remote peer at address 127.0.105.2:36961
I20260430 07:55:52.739986  5128 tablet_copy_service.cc:140] P 9b6542a54f61418a894d790b5e1aa779: Received BeginTabletCopySession request for tablet 0d01e5768e6f435695871abd9deaee86 from peer 2e401b3aecfd46378718b182a4bec89f ({username='slave'} at 127.0.105.4:58181)
I20260430 07:55:52.740257  5128 tablet_copy_service.cc:161] P 9b6542a54f61418a894d790b5e1aa779: Beginning new tablet copy session on tablet 0d01e5768e6f435695871abd9deaee86 from peer 2e401b3aecfd46378718b182a4bec89f at {username='slave'} at 127.0.105.4:58181: session id = 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86
I20260430 07:55:52.743489  5128 tablet_copy_source_session.cc:215] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779: Tablet Copy: opened 5 blocks and 1 log segments
I20260430 07:55:52.746346  5583 ts_tablet_manager.cc:1916] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Deleting tablet data with delete state TABLET_DATA_COPYING
I20260430 07:55:52.752343  5583 ts_tablet_manager.cc:1929] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.0
I20260430 07:55:52.752983  5583 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0d01e5768e6f435695871abd9deaee86. 1 dirs total, 0 dirs full, 0 dirs failed
I20260430 07:55:52.755776  5583 tablet_copy_client.cc:806] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Starting download of 5 data blocks...
I20260430 07:55:52.771588  5583 tablet_copy_client.cc:670] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Starting download of 1 WAL segments...
I20260430 07:55:52.777829  5583 tablet_copy_client.cc:538] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20260430 07:55:52.781029  5583 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Bootstrap starting.
I20260430 07:55:52.906539  5108 consensus_queue.cc:237] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 221, Committed index: 221, Last appended: 4.221, Last appended by leader: 220, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 222 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } }
I20260430 07:55:52.908993  4969 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 4 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 4 index: 221. Preceding OpId from leader: term: 4 index: 222. (index mismatch)
I20260430 07:55:52.908964  5245 raft_consensus.cc:1275] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 4 FOLLOWER]: Refusing update from remote peer 9b6542a54f61418a894d790b5e1aa779: Log matching property violated. Preceding OpId in replica: term: 4 index: 221. Preceding OpId from leader: term: 4 index: 222. (index mismatch)
I20260430 07:55:52.909929  5548 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 222, Last known committed idx: 221, Time since last communication: 0.000s
I20260430 07:55:52.910449  5546 consensus_queue.cc:1048] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 222, Last known committed idx: 221, Time since last communication: 0.000s
I20260430 07:55:52.913296  5506 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 [term 4 LEADER]: Committing config change with OpId 4.222: config changed from index 220 to 222, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) evicted. New config: { opid_index: 222 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } }
I20260430 07:55:52.913970  5245 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc [term 4 FOLLOWER]: Committing config change with OpId 4.222: config changed from index 220 to 222, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) evicted. New config: { opid_index: 222 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } }
I20260430 07:55:52.918009  4969 raft_consensus.cc:2955] T 0d01e5768e6f435695871abd9deaee86 P 0696ac6914f940f2bcdc99c5d5c3d0e5 [term 4 FOLLOWER]: Committing config change with OpId 4.222: config changed from index 220 to 222, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) evicted. New config: { opid_index: 222 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } }
I20260430 07:55:52.922873  4814 catalog_manager.cc:5184] ChangeConfig:REMOVE_PEER RPC for tablet 0d01e5768e6f435695871abd9deaee86 with cas_config_opid_index 220: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20260430 07:55:52.924006  4827 catalog_manager.cc:5671] T 0d01e5768e6f435695871abd9deaee86 P ea6bd19fddfe4b988f4682a0bdec2adc reported cstate change: config changed from index 220 to 222, NON_VOTER 2e401b3aecfd46378718b182a4bec89f (127.0.105.4) evicted. New cstate: current_term: 4 leader_uuid: "9b6542a54f61418a894d790b5e1aa779" committed_config { opid_index: 222 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } }
I20260430 07:55:52.939564  5369 tablet_service.cc:1558] Processing DeleteTablet for tablet 0d01e5768e6f435695871abd9deaee86 with delete_type TABLET_DATA_TOMBSTONED (TS 2e401b3aecfd46378718b182a4bec89f not found in new config with opid_index 222) from {username='slave'} at 127.0.0.1:47904
W20260430 07:55:52.942090  4812 catalog_manager.cc:4982] TS 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): delete failed for tablet 0d01e5768e6f435695871abd9deaee86 because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 0d01e5768e6f435695871abd9deaee86 already in progress: copying tablet
I20260430 07:55:53.053570  5583 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Bootstrap replayed 1/1 log segments. Stats: ops{read=221 overwritten=0 applied=221 ignored=217} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20260430 07:55:53.054167  5583 tablet_bootstrap.cc:492] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Bootstrap complete.
I20260430 07:55:53.054548  5583 ts_tablet_manager.cc:1403] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Time spent bootstrapping tablet: real 0.274s	user 0.251s	sys 0.024s
I20260430 07:55:53.055486  5583 raft_consensus.cc:359] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:53.055735  5583 raft_consensus.cc:740] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 2e401b3aecfd46378718b182a4bec89f, State: Initialized, Role: LEARNER
I20260430 07:55:53.055991  5583 consensus_queue.cc:260] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 221, Last appended: 4.221, Last appended by leader: 221, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 220 OBSOLETE_local: false peers { permanent_uuid: "0696ac6914f940f2bcdc99c5d5c3d0e5" member_type: VOTER last_known_addr { host: "127.0.105.1" port: 36583 } } peers { permanent_uuid: "9b6542a54f61418a894d790b5e1aa779" member_type: VOTER last_known_addr { host: "127.0.105.2" port: 36961 } } peers { permanent_uuid: "ea6bd19fddfe4b988f4682a0bdec2adc" member_type: VOTER last_known_addr { host: "127.0.105.3" port: 40671 } } peers { permanent_uuid: "2e401b3aecfd46378718b182a4bec89f" member_type: NON_VOTER last_known_addr { host: "127.0.105.4" port: 39005 } attrs { promote: true } }
I20260430 07:55:53.056941  5583 ts_tablet_manager.cc:1434] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Time spent starting tablet: real 0.002s	user 0.003s	sys 0.000s
I20260430 07:55:53.058890  5128 tablet_copy_service.cc:342] P 9b6542a54f61418a894d790b5e1aa779: Request end of tablet copy session 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86 received from {username='slave'} at 127.0.105.4:58181
I20260430 07:55:53.059741  5128 tablet_copy_service.cc:434] P 9b6542a54f61418a894d790b5e1aa779: ending tablet copy session 2e401b3aecfd46378718b182a4bec89f-0d01e5768e6f435695871abd9deaee86 on tablet 0d01e5768e6f435695871abd9deaee86 with peer 2e401b3aecfd46378718b182a4bec89f
I20260430 07:55:53.060379  5369 tablet_service.cc:1558] Processing DeleteTablet for tablet 0d01e5768e6f435695871abd9deaee86 with delete_type TABLET_DATA_TOMBSTONED (Replica with old config index 220 (current committed config index is 222)) from {username='slave'} at 127.0.0.1:47904
I20260430 07:55:53.060945  5594 tablet_replica.cc:333] stopping tablet replica
I20260430 07:55:53.061554  5594 raft_consensus.cc:2243] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 4 LEARNER]: Raft consensus shutting down.
I20260430 07:55:53.062361  5594 raft_consensus.cc:2272] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f [term 4 LEARNER]: Raft consensus is shut down!
I20260430 07:55:53.063820  5594 ts_tablet_manager.cc:1916] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20260430 07:55:53.071645  5594 ts_tablet_manager.cc:1929] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 4.221
I20260430 07:55:53.072737  5594 log.cc:1199] T 0d01e5768e6f435695871abd9deaee86 P 2e401b3aecfd46378718b182a4bec89f: Deleting WAL directory at /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/wal/wals/0d01e5768e6f435695871abd9deaee86
I20260430 07:55:53.074671  4812 catalog_manager.cc:5002] TS 2e401b3aecfd46378718b182a4bec89f (127.0.105.4:39005): tablet 0d01e5768e6f435695871abd9deaee86 (table table_a [id=ff76305e07ff4c568227a9fd62f5f0d9]) successfully deleted
I20260430 07:55:53.351186  4950 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:53.364516  5369 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
I20260430 07:55:53.370575  5088 tablet_service.cc:1467] Tablet server has 2 leaders and 0 scanners
I20260430 07:55:53.372133  5226 tablet_service.cc:1467] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |      Address       | Status
----------------------------------+--------------------+---------
 3a75644a42d64405ae1dd61a955fbcc9 | 127.0.105.62:33525 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                              Value                                                                               |      Tags       |         Master
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                              | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                             | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                             | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                             | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                               | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                              | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |       Value        |         Master
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 1 server(s) checked
 time_source         | builtin            | all 1 server(s) checked

Tablet Server Summary
               UUID               |      Address      | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+-------------------+---------+----------+----------------+-----------------
 0696ac6914f940f2bcdc99c5d5c3d0e5 | 127.0.105.1:36583 | HEALTHY | <none>   |       0        |       0
 2e401b3aecfd46378718b182a4bec89f | 127.0.105.4:39005 | HEALTHY | <none>   |       0        |       0
 9b6542a54f61418a894d790b5e1aa779 | 127.0.105.2:36961 | HEALTHY | <none>   |       2        |       0
 ea6bd19fddfe4b988f4682a0bdec2adc | 127.0.105.3:40671 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       4

Unusual flags for Tablet Server:
                  Flag                   |                                                                            Value                                                                             |         Tags         |      Tablet Server
-----------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------------
 flush_threshold_mb                      | 0                                                                                                                                                            | runtime,experimental | all 4 server(s) checked
 ipki_server_key_size                    | 768                                                                                                                                                          | experimental         | all 4 server(s) checked
 local_ip_for_outbound_sockets           | 127.0.105.1                                                                                                                                                  | experimental         | 127.0.105.1:36583
 local_ip_for_outbound_sockets           | 127.0.105.2                                                                                                                                                  | experimental         | 127.0.105.2:36961
 local_ip_for_outbound_sockets           | 127.0.105.3                                                                                                                                                  | experimental         | 127.0.105.3:40671
 local_ip_for_outbound_sockets           | 127.0.105.4                                                                                                                                                  | experimental         | 127.0.105.4:39005
 maintenance_manager_polling_interval_ms | 10                                                                                                                                                           | hidden               | all 4 server(s) checked
 never_fsync                             | true                                                                                                                                                         | unsafe,advanced      | all 4 server(s) checked
 openssl_security_level_override         | 0                                                                                                                                                            | unsafe,hidden        | all 4 server(s) checked
 rpc_server_allow_ephemeral_ports        | true                                                                                                                                                         | unsafe               | all 4 server(s) checked
 server_dump_info_format                 | pb                                                                                                                                                           | hidden               | all 4 server(s) checked
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-0/data/info.pb | hidden               | 127.0.105.1:36583
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-1/data/info.pb | hidden               | 127.0.105.2:36961
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-2/data/info.pb | hidden               | 127.0.105.3:40671
 server_dump_info_path                   | /tmp/dist-test-taskupxCjQ/test-tmp/tablet_copy-itest.0.FaultFlags_BadTabletCopyITest.TestBadCopy_1.1777535629930685-420-0/minicluster-data/ts-3/data/info.pb | hidden               | 127.0.105.4:39005
 tablet_copy_early_session_timeout_prob  | 0                                                                                                                                                            | unsafe,runtime       | all 4 server(s) checked

Flags of checked categories for Tablet Server:
        Flag         |       Value        |      Tablet Server
---------------------+--------------------+-------------------------
 builtin_ntp_servers | 127.0.105.20:38547 | all 4 server(s) checked
 time_source         | builtin            | all 4 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 5 server(s) checked

Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
  Name   | RF | Status  | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
---------+----+---------+---------------+---------+------------+------------------+-------------
 table_a | 3  | HEALTHY | 1             | 1       | 0          | 0                | 0
 table_b | 3  | HEALTHY | 1             | 1       | 0          | 0                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 1
 Median         | 2
 Third Quartile | 2
 Maximum        | 2

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 4
 Tables         | 2
 Tablets        | 2
 Replicas       | 6

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

OK
I20260430 07:55:53.448221   420 log_verifier.cc:126] Checking tablet 0d01e5768e6f435695871abd9deaee86
I20260430 07:55:53.883867   420 log_verifier.cc:177] Verified matching terms for 222 ops in tablet 0d01e5768e6f435695871abd9deaee86
I20260430 07:55:53.884439   420 log_verifier.cc:126] Checking tablet 930fc8ad15e14df2af31bad7407f95ad
I20260430 07:55:54.264976   420 log_verifier.cc:177] Verified matching terms for 207 ops in tablet 930fc8ad15e14df2af31bad7407f95ad
I20260430 07:55:54.265580   420 tablet_copy-itest.cc:1620] Checking row count...
I20260430 07:55:54.375813   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4865
I20260430 07:55:54.396528  5009 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:54.520411   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4865
I20260430 07:55:54.550033   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 5019
W20260430 07:55:54.550228  5042 connection.cc:570] client connection to 127.0.105.1:36583 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20260430 07:55:54.550504  5042 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20260430 07:55:54.555449  5042 consensus_peers.cc:597] T 0d01e5768e6f435695871abd9deaee86 P 9b6542a54f61418a894d790b5e1aa779 -> Peer 0696ac6914f940f2bcdc99c5d5c3d0e5 (127.0.105.1:36583): Couldn't send request to peer 0696ac6914f940f2bcdc99c5d5c3d0e5. Status: Network error: Client connection negotiation failed: client connection to 127.0.105.1:36583: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20260430 07:55:54.561653  5148 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:54.731415   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 5019
I20260430 07:55:54.761581   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 5157
I20260430 07:55:54.769104  5286 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:54.905424   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 5157
I20260430 07:55:54.930243   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 5295
I20260430 07:55:54.937808  5429 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:55.074388   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 5295
W20260430 07:55:55.093966  4813 connection.cc:570] server connection from 127.0.105.4:50511 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20260430 07:55:55.094281   420 external_mini_cluster.cc:1699] Attempting to check leaks for /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu pid 4797
I20260430 07:55:55.095386  4858 generic_service.cc:196] Checking for leaks (request via RPC)
I20260430 07:55:55.236263   420 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskupxCjQ/build/asan/bin/kudu with pid 4797
2026-04-30T07:55:55Z chronyd exiting
[       OK ] FaultFlags/BadTabletCopyITest.TestBadCopy/1 (37535 ms)
[----------] 1 test from FaultFlags/BadTabletCopyITest (37535 ms total)

[----------] Global test environment tear-down
[==========] 5 tests from 2 test suites ran. (125273 ms total)
[  PASSED  ] 4 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] TabletCopyITest.TestDownloadWalInParallelWithHeavyUpdate

 1 FAILED TEST
I20260430 07:55:55.292569   420 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/integration-tests/tablet_copy-itest.cc:1499: suppressed but not reported on 19 messages since previous log ~20 seconds ago
I20260430 07:55:55.292781   420 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/batcher.cc:441: suppressed but not reported on 3 messages since previous log ~116 seconds ago
I20260430 07:55:55.292832   420 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/meta_cache.cc:302: suppressed but not reported on 3 messages since previous log ~116 seconds ago