Diagnosed failure

AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy:
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b480012a940 by thread T303 (mutexes: read M1056229661787465760):
    #0 std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 (libksck.so+0x11173a)
    #1 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:331:19 (libksck.so+0x111b2d)
    #2 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #3 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #4 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #5 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #6 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #7 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #8 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #9 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #10 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #11 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #12 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #13 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #14 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #15 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #16 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #17 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 1 at 0x7b480012a940 by main thread:
    #0 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:324:26 (auto_rebalancer-test+0x39259f)
    #1 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #2 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #3 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #4 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:493:3 (auto_rebalancer-test+0x369f46)
    #5 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #6 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #7 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #8 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #9 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #10 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #11 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #12 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #13 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #14 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #15 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)


AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy:
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b480012a928 by thread T303 (mutexes: read M1056229661787465760):
</WARNING_FIX>
    #0 __is_long /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 (libc++.so.1+0xc64d4)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1881:16 (libc++.so.1+0xc64d4)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b480012a928 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:493:3 (auto_rebalancer-test+0x369f46)
    #7 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #8 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #9 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #10 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #11 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #12 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #13 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #14 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #15 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #16 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #17 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)


AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy: WARNING: ThreadSanitizer: data race (pid=20370)  Read of size 8 at 0x7b480012a930 by thread T303 (mutexes: read M1056229661787465760):
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string (libc++.so.1+0xc6572)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b480012a930 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #7 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #8 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #9 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #10 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #11 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #12 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #13 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #14 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)


AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy: WARNING: ThreadSanitizer: data race (pid=20370)  Read of size 1 at 0x7b48000749c0 by thread T303 (mutexes: read M82126):
    #0 std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 (libksck.so+0x11173a)
    #1 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:331:19 (libksck.so+0x111b2d)
    #2 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #3 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #4 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #5 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #6 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #7 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #8 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #9 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #10 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #11 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #12 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #13 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #14 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #15 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #16 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #17 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 1 at 0x7b48000749c0 by main thread:
    #0 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:324:26 (auto_rebalancer-test+0x39259f)
    #1 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #2 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #3 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #4 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #5 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #6 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #7 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #8 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #9 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #10 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #11 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #12 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)
I20250114 20:58:05.162451 20370 test_util.cc:274] Using random seed: -784307415

  Location is heap block of size 376 at 0x7b4800074880 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M82126 (0x7b48000748a0) created at:
    #0 AnnotateRWLockCreate /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp:254 (auto_rebalancer-test+0x33558e)
    #1 kudu::rw_spinlock::rw_spinlock() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/locks.h:86:5 (libmaster.so+0x355fce)
    #2 kudu::master::TSDescriptor::TSDescriptor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:79:15 (libmaster.so+0x494651)
    #3 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler::make_shared_enabler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:57:53 (libmaster.so+0x49b659)
    #4 std::__1::__compressed_pair_elem<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, 1, false>::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, 0ul>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::__tuple_indices<0ul>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2113:9 (libmaster.so+0x49b599)
    #5 std::__1::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2197:9 (libmaster.so+0x49b284)
    #6 std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> >::__shared_ptr_emplace<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:3470:16 (libmaster.so+0x49ae9e)
    #7 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4291:26 (libmaster.so+0x49ab70)
    #8 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #9 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #10 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #11 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #12 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #13 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #15 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #16 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #17 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #18 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #19 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #20 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #21 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #22 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #23 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #24 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #25 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #26 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #27 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #28 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)


AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy: WARNING: ThreadSanitizer: data race (pid=20370)  Read of size 1 at 0x7b48000749a8 by thread T303 (mutexes: read M82126):
    #0 __is_long /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 (libc++.so.1+0xc64d4)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1881:16 (libc++.so.1+0xc64d4)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b48000749a8 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #7 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #8 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #9 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #10 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #11 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #12 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #13 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #14 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b4800074880 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M82126 (0x7b48000748a0) created at:
    #0 AnnotateRWLockCreate /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp:254 (auto_rebalancer-test+0x33558e)
    #1 kudu::rw_spinlock::rw_spinlock() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/locks.h:86:5 (libmaster.so+0x355fce)
    #2 kudu::master::TSDescriptor::TSDescriptor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:79:15 (libmaster.so+0x494651)
    #3 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler::make_shared_enabler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:57:53 (libmaster.so+0x49b659)
    #4 std::__1::__compressed_pair_elem<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, 1, false>::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, 0ul>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::__tuple_indices<0ul>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2113:9 (libmaster.so+0x49b599)
    #5 std::__1::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2197:9 (libmaster.so+0x49b284)
    #6 std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> >::__shared_ptr_emplace<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:3470:16 (libmaster.so+0x49ae9e)
    #7 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4291:26 (libmaster.so+0x49ab70)
    #8 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #9 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #10 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #11 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #12 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #13 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #15 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #16 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #17 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #18 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #19 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #20 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #21 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #22 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #23 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #24 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #25 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #26 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #27 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #28 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)
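The race reported above is the classic unsynchronized-copy pattern: the auto-rebalancer thread copies a `std::optional<std::string>` member (the copy constructor calls `has_value()`, frame #0 of the read stack) while the heartbeat-handling RPC thread is still constructing the owning `TSDescriptor` in `TSManager::RegisterTS()`. A minimal sketch of the safe access pattern is below, using `std::shared_mutex` as a stand-in for Kudu's `rw_spinlock`; the class and member names are illustrative only, not Kudu's actual code:

```cpp
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <utility>

// Illustrative stand-in for a descriptor whose optional 'location'
// field is written by a registration thread and read concurrently by
// a background task. Every access goes through the lock, so the
// std::optional copy can never race with its construction/assignment.
class TSDescriptorSketch {
 public:
  void AssignLocation(std::string loc) {
    std::unique_lock<std::shared_mutex> l(lock_);  // exclusive write lock
    location_ = std::move(loc);
  }

  // Return a copy made while holding the shared (read) lock; never
  // hand out a reference to the guarded member.
  std::optional<std::string> location() const {
    std::shared_lock<std::shared_mutex> l(lock_);
    return location_;
  }

 private:
  mutable std::shared_mutex lock_;
  std::optional<std::string> location_;
};
```

The alternative fix, if the field is written only once at registration, is to fully construct the descriptor before publishing it to other threads (e.g. inserting it into the shared map only after all fields are set), so readers can access it lock-free.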

Full log

[==========] Running 14 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 14 tests from AutoRebalancerTest
[ RUN      ] AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250114 20:56:34.317225 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:33843,127.19.228.189:34671,127.19.228.188:37523
I20250114 20:56:34.318893 20370 env_posix.cc:2256] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250114 20:56:34.319913 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:34.338032 20376 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:34.338303 20377 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:34.339871 20379 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:35.470094 20378 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250114 20:56:35.470271 20370 server_base.cc:1029] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250114 20:56:35.474259 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:35.474524 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:35.474723 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888195474697 us; error 0 us; skew 500 ppm
I20250114 20:56:35.475531 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:35.481613 20370 webserver.cc:458] Webserver started at http://127.19.228.190:35159/ using document root <none> and password file <none>
I20250114 20:56:35.482479 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:35.482683 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:35.483144 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:35.489436 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "0cb5f82f3f284baeb9e765878aab52d7"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.497412 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.004s
I20250114 20:56:35.503768 20385 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:35.505288 20370 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.000s
I20250114 20:56:35.505698 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "0cb5f82f3f284baeb9e765878aab52d7"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.506049 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:35.569964 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:35.571902 20370 env_posix.cc:2256] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250114 20:56:35.572352 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:35.649153 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:33843
I20250114 20:56:35.649246 20436 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:33843 every 8 connection(s)
I20250114 20:56:35.655175 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:56:35.656244 20437 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
W20250114 20:56:35.661583 20439 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:35.663470 20440 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:35.666747 20442 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:35.668015 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:35.668627 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:35.668804 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:35.668953 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888195668937 us; error 0 us; skew 500 ppm
I20250114 20:56:35.669446 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:35.672823 20370 webserver.cc:458] Webserver started at http://127.19.228.189:44083/ using document root <none> and password file <none>
I20250114 20:56:35.673300 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:35.673465 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:35.673702 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:35.674713 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root/instance:
uuid: "1e91e41f6fcb4e89a0c43c650c59dd65"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.680333 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.002s	sys 0.004s
I20250114 20:56:35.678753 20437 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:35.684711 20447 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:35.685911 20370 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.002s
I20250114 20:56:35.686338 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
uuid: "1e91e41f6fcb4e89a0c43c650c59dd65"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.686782 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:35.708606 20437 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:56:35.710868 20386 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.189:34671: connect: Connection refused (error 111)
I20250114 20:56:35.715281 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
W20250114 20:56:35.715411 20437 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.189:34671: Network error: Client connection negotiation failed: client connection to 127.19.228.189:34671: connect: Connection refused (error 111)
I20250114 20:56:35.716610 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:35.765204 20437 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } attempt: 1
W20250114 20:56:35.769482 20437 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.189:34671: Network error: Client connection negotiation failed: client connection to 127.19.228.189:34671: connect: Connection refused (error 111)
I20250114 20:56:35.787312 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.189:34671
I20250114 20:56:35.787456 20501 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.189:34671 every 8 connection(s)
I20250114 20:56:35.792222 20502 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:35.792277 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:35.798671 20504 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:35.800282 20502 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:56:35.801781 20505 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:35.806057 20507 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:35.806455 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:35.807168 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:35.807359 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:35.807507 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888195807489 us; error 0 us; skew 500 ppm
I20250114 20:56:35.808045 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:35.811029 20370 webserver.cc:458] Webserver started at http://127.19.228.188:35891/ using document root <none> and password file <none>
I20250114 20:56:35.811661 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:35.811899 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:35.812145 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:35.813623 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root/instance:
uuid: "fbfa5afee2ba426e804601bebd76bf83"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.814370 20502 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:35.819998 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.007s	sys 0.000s
I20250114 20:56:35.825065 20514 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:35.825973 20370 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.003s
I20250114 20:56:35.826288 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
uuid: "fbfa5afee2ba426e804601bebd76bf83"
format_stamp: "Formatted at 2025-01-14 20:56:35 on dist-test-slave-kc3q"
I20250114 20:56:35.826581 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:35.826910 20502 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:56:35.834218 20502 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:37523: Network error: Client connection negotiation failed: client connection to 127.19.228.188:37523: connect: Connection refused (error 111)
I20250114 20:56:35.837826 20437 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } attempt: 2
I20250114 20:56:35.847237 20437 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:56:35.851116 20437 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:37523: Network error: Client connection negotiation failed: client connection to 127.19.228.188:37523: connect: Connection refused (error 111)
I20250114 20:56:35.852252 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:35.853410 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:35.878398 20502 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } attempt: 1
W20250114 20:56:35.882692 20502 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:37523: Network error: Client connection negotiation failed: client connection to 127.19.228.188:37523: connect: Connection refused (error 111)
I20250114 20:56:35.882781 20437 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } attempt: 1
W20250114 20:56:35.886814 20437 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:37523: Network error: Client connection negotiation failed: client connection to 127.19.228.188:37523: connect: Connection refused (error 111)
I20250114 20:56:35.927505 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.188:37523
I20250114 20:56:35.927649 20566 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.188:37523 every 8 connection(s)
I20250114 20:56:35.932351 20567 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:35.932318 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:56:35.938181 20567 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:35.949270 20567 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:35.954309 20437 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } attempt: 2
I20250114 20:56:35.958202 20502 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } attempt: 2
I20250114 20:56:35.958647 20567 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:35.978087 20437 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: Bootstrap starting.
I20250114 20:56:35.978565 20502 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: Bootstrap starting.
I20250114 20:56:35.980041 20567 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: Bootstrap starting.
I20250114 20:56:35.985407 20502 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:35.986688 20437 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:35.986907 20567 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:35.987744 20502 log.cc:826] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: Log is configured to *not* fsync() on all Append() calls
I20250114 20:56:35.995448 20437 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: No bootstrap required, opened a new log
I20250114 20:56:35.995579 20567 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: No bootstrap required, opened a new log
I20250114 20:56:35.995666 20502 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: No bootstrap required, opened a new log
I20250114 20:56:36.016069 20567 raft_consensus.cc:357] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.015993 20437 raft_consensus.cc:357] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.015995 20502 raft_consensus.cc:357] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.016999 20567 raft_consensus.cc:383] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:36.017055 20437 raft_consensus.cc:383] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:36.017100 20502 raft_consensus.cc:383] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:36.017334 20567 raft_consensus.cc:738] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fbfa5afee2ba426e804601bebd76bf83, State: Initialized, Role: FOLLOWER
I20250114 20:56:36.017455 20437 raft_consensus.cc:738] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0cb5f82f3f284baeb9e765878aab52d7, State: Initialized, Role: FOLLOWER
I20250114 20:56:36.017539 20502 raft_consensus.cc:738] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1e91e41f6fcb4e89a0c43c650c59dd65, State: Initialized, Role: FOLLOWER
I20250114 20:56:36.018337 20437 consensus_queue.cc:260] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.018340 20567 consensus_queue.cc:260] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.018340 20502 consensus_queue.cc:260] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.022893 20437 sys_catalog.cc:564] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:36.023087 20578 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.023950 20578 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.024916 20502 sys_catalog.cc:564] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:36.025395 20577 sys_catalog.cc:455] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.026602 20577 sys_catalog.cc:458] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.026487 20576 sys_catalog.cc:455] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.027245 20567 sys_catalog.cc:564] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:36.027436 20576 sys_catalog.cc:458] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.054428 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
W20250114 20:56:36.069597 20598 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:36.069991 20598 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:56:36.077873 20391 tablet.cc:2367] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250114 20:56:36.079058 20606 catalog_manager.cc:1559] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:36.079295 20606 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:56:36.080889 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:56:36.082865 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:56:36.084702 20576 raft_consensus.cc:491] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:36.085251 20576 raft_consensus.cc:513] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
W20250114 20:56:36.087040 20612 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:36.087399 20612 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:56:36.088784 20576 leader_election.cc:290] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1e91e41f6fcb4e89a0c43c650c59dd65 (127.19.228.189:34671), fbfa5afee2ba426e804601bebd76bf83 (127.19.228.188:37523)
I20250114 20:56:36.088968 20477 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "0cb5f82f3f284baeb9e765878aab52d7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" is_pre_election: true
I20250114 20:56:36.089514 20541 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "0cb5f82f3f284baeb9e765878aab52d7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fbfa5afee2ba426e804601bebd76bf83" is_pre_election: true
I20250114 20:56:36.089838 20477 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cb5f82f3f284baeb9e765878aab52d7 in term 0.
I20250114 20:56:36.090137 20541 raft_consensus.cc:2463] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cb5f82f3f284baeb9e765878aab52d7 in term 0.
I20250114 20:56:36.091310 20386 leader_election.cc:304] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cb5f82f3f284baeb9e765878aab52d7, 1e91e41f6fcb4e89a0c43c650c59dd65; no voters: 
I20250114 20:56:36.092314 20576 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:36.092646 20576 raft_consensus.cc:491] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:36.092965 20576 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 0 FOLLOWER]: Advancing to term 1
W20250114 20:56:36.096940 20613 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:36.098955 20614 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.099189 20576 raft_consensus.cc:513] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.101636 20477 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "0cb5f82f3f284baeb9e765878aab52d7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65"
I20250114 20:56:36.101969 20541 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "0cb5f82f3f284baeb9e765878aab52d7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fbfa5afee2ba426e804601bebd76bf83"
I20250114 20:56:36.102182 20477 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:36.102545 20541 raft_consensus.cc:3054] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:36.108646 20477 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cb5f82f3f284baeb9e765878aab52d7 in term 1.
I20250114 20:56:36.108896 20541 raft_consensus.cc:2463] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cb5f82f3f284baeb9e765878aab52d7 in term 1.
I20250114 20:56:36.109747 20386 leader_election.cc:304] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cb5f82f3f284baeb9e765878aab52d7, 1e91e41f6fcb4e89a0c43c650c59dd65; no voters: 
I20250114 20:56:36.110620 20576 leader_election.cc:290] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [CANDIDATE]: Term 1 election: Requested vote from peers 1e91e41f6fcb4e89a0c43c650c59dd65 (127.19.228.189:34671), fbfa5afee2ba426e804601bebd76bf83 (127.19.228.188:37523)
I20250114 20:56:36.111182 20576 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:36.111624 20576 raft_consensus.cc:695] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 1 LEADER]: Becoming Leader. State: Replica: 0cb5f82f3f284baeb9e765878aab52d7, State: Running, Role: LEADER
W20250114 20:56:36.112936 20617 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.112529 20576 consensus_queue.cc:237] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } }
I20250114 20:56:36.117661 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:36.118675 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:36.118973 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:36.119212 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888196119185 us; error 0 us; skew 500 ppm
I20250114 20:56:36.119989 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:36.124254 20618 sys_catalog.cc:455] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 0cb5f82f3f284baeb9e765878aab52d7. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.125069 20370 webserver.cc:458] Webserver started at http://127.19.228.129:36987/ using document root <none> and password file <none>
I20250114 20:56:36.125082 20618 sys_catalog.cc:458] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:36.125818 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:36.126082 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:36.126422 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:36.127571 20622 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:56:36.127705 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "9162a38444fb4e3caf2a2017944cc097"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.132987 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:56:36.134824 20622 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:56:36.138798 20624 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.140057 20370 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:56:36.140311 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "9162a38444fb4e3caf2a2017944cc097"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.140583 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:36.158339 20541 raft_consensus.cc:1270] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 1 FOLLOWER]: Refusing update from remote peer 0cb5f82f3f284baeb9e765878aab52d7: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:36.159157 20477 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 1 FOLLOWER]: Refusing update from remote peer 0cb5f82f3f284baeb9e765878aab52d7: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:36.160101 20576 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [LEADER]: Connected to new peer: Peer: permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:36.161477 20576 consensus_queue.cc:1035] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:36.165241 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:36.167742 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:36.193308 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:36.194097 20578 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 0cb5f82f3f284baeb9e765878aab52d7. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.194865 20578 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.195830 20577 sys_catalog.cc:455] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 0cb5f82f3f284baeb9e765878aab52d7. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.196542 20577 sys_catalog.cc:458] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.200263 20627 mvcc.cc:204] Tried to move back new op lower bound from 7114294051446996992 to 7114294051288768512. Current Snapshot: MvccSnapshot[applied={T|T < 7114294051446996992}]
I20250114 20:56:36.204820 20630 sys_catalog.cc:455] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.205648 20630 sys_catalog.cc:458] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:36.205379 20576 sys_catalog.cc:455] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.206063 20576 sys_catalog.cc:458] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:36.207943 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:36.208243 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.208526 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:36.208734 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.209096 20577 sys_catalog.cc:455] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.209596 20577 sys_catalog.cc:458] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.211934 20622 catalog_manager.cc:1348] Generated new cluster ID: be6fbe04548a44dbba71d62b913c85fb
I20250114 20:56:36.212208 20622 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:56:36.220518 20578 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "0cb5f82f3f284baeb9e765878aab52d7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cb5f82f3f284baeb9e765878aab52d7" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 33843 } } peers { permanent_uuid: "1e91e41f6fcb4e89a0c43c650c59dd65" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 34671 } } peers { permanent_uuid: "fbfa5afee2ba426e804601bebd76bf83" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 37523 } } }
I20250114 20:56:36.221217 20578 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:36.238459 20622 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:56:36.242072 20622 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:56:36.271721 20622 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: Generated new TSK 0
I20250114 20:56:36.273054 20622 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:56:36.345577 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:33175
I20250114 20:56:36.345705 20695 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:33175 every 8 connection(s)
I20250114 20:56:36.365921 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:56:36.381197 20697 heartbeater.cc:346] Connected to a master server at 127.19.228.189:34671
I20250114 20:56:36.381839 20697 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.386456 20697 heartbeater.cc:510] Master 127.19.228.189:34671 requested a full tablet report, sending...
I20250114 20:56:36.387944 20698 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33843
I20250114 20:56:36.388312 20698 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.393239 20696 heartbeater.cc:346] Connected to a master server at 127.19.228.188:37523
I20250114 20:56:36.393829 20696 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.394090 20698 heartbeater.cc:510] Master 127.19.228.190:33843 requested a full tablet report, sending...
W20250114 20:56:36.393920 20703 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.395713 20696 heartbeater.cc:510] Master 127.19.228.188:37523 requested a full tablet report, sending...
W20250114 20:56:36.396147 20704 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.397307 20467 ts_manager.cc:194] Registered new tserver with Master: 9162a38444fb4e3caf2a2017944cc097 (127.19.228.129:33175)
I20250114 20:56:36.399030 20402 ts_manager.cc:194] Registered new tserver with Master: 9162a38444fb4e3caf2a2017944cc097 (127.19.228.129:33175)
I20250114 20:56:36.401443 20532 ts_manager.cc:194] Registered new tserver with Master: 9162a38444fb4e3caf2a2017944cc097 (127.19.228.129:33175)
W20250114 20:56:36.403431 20706 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.403720 20402 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53796
I20250114 20:56:36.405275 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:36.407210 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:36.407399 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:36.407585 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888196407566 us; error 0 us; skew 500 ppm
I20250114 20:56:36.408030 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:36.410916 20370 webserver.cc:458] Webserver started at http://127.19.228.130:41201/ using document root <none> and password file <none>
I20250114 20:56:36.411345 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:36.411520 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:36.411813 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:36.412801 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "86cc543df1ca435488780fa7f5aad5da"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.416923 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:56:36.420011 20711 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.420749 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:56:36.421001 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "86cc543df1ca435488780fa7f5aad5da"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.421262 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:36.433089 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:36.434213 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:36.436867 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:36.439772 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:36.439951 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.440164 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:36.440294 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.525467 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:34851
I20250114 20:56:36.525517 20773 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:34851 every 8 connection(s)
I20250114 20:56:36.561605 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:36.572677 20781 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.572930 20774 heartbeater.cc:346] Connected to a master server at 127.19.228.188:37523
I20250114 20:56:36.573835 20774 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.574687 20774 heartbeater.cc:510] Master 127.19.228.188:37523 requested a full tablet report, sending...
I20250114 20:56:36.578244 20775 heartbeater.cc:346] Connected to a master server at 127.19.228.189:34671
I20250114 20:56:36.578579 20775 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.579089 20532 ts_manager.cc:194] Registered new tserver with Master: 86cc543df1ca435488780fa7f5aad5da (127.19.228.130:34851)
I20250114 20:56:36.579319 20775 heartbeater.cc:510] Master 127.19.228.189:34671 requested a full tablet report, sending...
I20250114 20:56:36.582286 20467 ts_manager.cc:194] Registered new tserver with Master: 86cc543df1ca435488780fa7f5aad5da (127.19.228.130:34851)
W20250114 20:56:36.587333 20782 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.589888 20776 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33843
I20250114 20:56:36.590303 20776 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.591567 20776 heartbeater.cc:510] Master 127.19.228.190:33843 requested a full tablet report, sending...
I20250114 20:56:36.592644 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:36.593667 20402 ts_manager.cc:194] Registered new tserver with Master: 86cc543df1ca435488780fa7f5aad5da (127.19.228.130:34851)
W20250114 20:56:36.593835 20784 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:36.594916 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:56:36.594972 20402 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53806
W20250114 20:56:36.595301 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:36.595571 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888196595532 us; error 0 us; skew 500 ppm
I20250114 20:56:36.596280 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:36.599028 20370 webserver.cc:458] Webserver started at http://127.19.228.131:36495/ using document root <none> and password file <none>
I20250114 20:56:36.599642 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:36.599872 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:36.600189 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:36.601527 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "4b7d8ad610204544a75702e13d05c350"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.606949 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.005s	user 0.007s	sys 0.000s
I20250114 20:56:36.610782 20789 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.611569 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:36.611865 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "4b7d8ad610204544a75702e13d05c350"
format_stamp: "Formatted at 2025-01-14 20:56:36 on dist-test-slave-kc3q"
I20250114 20:56:36.612133 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:36.624599 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:36.625774 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:36.628365 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:36.631038 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:36.631232 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:36.631444 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:36.631623 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.001s
I20250114 20:56:36.725754 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:42565
I20250114 20:56:36.725878 20851 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:42565 every 8 connection(s)
I20250114 20:56:36.759232 20852 heartbeater.cc:346] Connected to a master server at 127.19.228.188:37523
I20250114 20:56:36.759688 20852 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.760578 20852 heartbeater.cc:510] Master 127.19.228.188:37523 requested a full tablet report, sending...
I20250114 20:56:36.761508 20853 heartbeater.cc:346] Connected to a master server at 127.19.228.189:34671
I20250114 20:56:36.761842 20853 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.762753 20853 heartbeater.cc:510] Master 127.19.228.189:34671 requested a full tablet report, sending...
I20250114 20:56:36.763655 20532 ts_manager.cc:194] Registered new tserver with Master: 4b7d8ad610204544a75702e13d05c350 (127.19.228.131:42565)
I20250114 20:56:36.765973 20855 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33843
I20250114 20:56:36.766078 20467 ts_manager.cc:194] Registered new tserver with Master: 4b7d8ad610204544a75702e13d05c350 (127.19.228.131:42565)
I20250114 20:56:36.766381 20855 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:36.767370 20855 heartbeater.cc:510] Master 127.19.228.190:33843 requested a full tablet report, sending...
I20250114 20:56:36.769280 20402 ts_manager.cc:194] Registered new tserver with Master: 4b7d8ad610204544a75702e13d05c350 (127.19.228.131:42565)
I20250114 20:56:36.769786 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.024529383s
I20250114 20:56:36.771363 20402 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:53812
I20250114 20:56:37.084766 20606 catalog_manager.cc:1260] Loaded cluster ID: be6fbe04548a44dbba71d62b913c85fb
I20250114 20:56:37.085114 20606 catalog_manager.cc:1553] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: loading cluster ID for follower catalog manager: success
I20250114 20:56:37.091344 20612 catalog_manager.cc:1260] Loaded cluster ID: be6fbe04548a44dbba71d62b913c85fb
I20250114 20:56:37.091624 20612 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: loading cluster ID for follower catalog manager: success
I20250114 20:56:37.092012 20606 catalog_manager.cc:1575] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: acquiring CA information for follower catalog manager: success
I20250114 20:56:37.095960 20612 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: acquiring CA information for follower catalog manager: success
I20250114 20:56:37.097304 20606 catalog_manager.cc:1603] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:56:37.099391 20612 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:56:37.406817 20698 heartbeater.cc:502] Master 127.19.228.190:33843 was elected leader, sending a full tablet report...
I20250114 20:56:37.597362 20776 heartbeater.cc:502] Master 127.19.228.190:33843 was elected leader, sending a full tablet report...
I20250114 20:56:37.773830 20855 heartbeater.cc:502] Master 127.19.228.190:33843 was elected leader, sending a full tablet report...
I20250114 20:56:38.052713 20596 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:38.321681 20370 test_util.cc:274] Using random seed: -871148041
I20250114 20:56:38.378918 20402 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:57476:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250114 20:56:38.455785 20660 tablet_service.cc:1467] Processing CreateTablet for tablet 3abe274d326247a7a20aa491905853ff (DEFAULT_TABLE table=test-workload [id=fe3d89507f9842b1a56d8a740fa68bd9]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:56:38.456131 20739 tablet_service.cc:1467] Processing CreateTablet for tablet 66238137b8314f01af44b8a9bcc7882a (DEFAULT_TABLE table=test-workload [id=fe3d89507f9842b1a56d8a740fa68bd9]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:56:38.457826 20660 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3abe274d326247a7a20aa491905853ff. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:38.457968 20739 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 66238137b8314f01af44b8a9bcc7882a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:38.455785 20661 tablet_service.cc:1467] Processing CreateTablet for tablet 85e788cbe05941918a57a8c9efa9a74d (DEFAULT_TABLE table=test-workload [id=fe3d89507f9842b1a56d8a740fa68bd9]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:56:38.459713 20661 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 85e788cbe05941918a57a8c9efa9a74d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:38.459744 20817 tablet_service.cc:1467] Processing CreateTablet for tablet bbd124158da4488196be8a0c80a0ec8f (DEFAULT_TABLE table=test-workload [id=fe3d89507f9842b1a56d8a740fa68bd9]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:56:38.461611 20817 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet bbd124158da4488196be8a0c80a0ec8f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:38.477569 20888 tablet_bootstrap.cc:492] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: Bootstrap starting.
I20250114 20:56:38.481653 20889 tablet_bootstrap.cc:492] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: Bootstrap starting.
I20250114 20:56:38.484750 20890 tablet_bootstrap.cc:492] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: Bootstrap starting.
I20250114 20:56:38.485152 20888 tablet_bootstrap.cc:654] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:38.490317 20889 tablet_bootstrap.cc:654] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:38.491392 20890 tablet_bootstrap.cc:654] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:38.492306 20888 tablet_bootstrap.cc:492] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: No bootstrap required, opened a new log
I20250114 20:56:38.492827 20888 ts_tablet_manager.cc:1397] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: Time spent bootstrapping tablet: real 0.016s	user 0.010s	sys 0.005s
I20250114 20:56:38.495638 20888 raft_consensus.cc:357] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.496277 20888 raft_consensus.cc:383] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:38.496651 20888 raft_consensus.cc:738] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9162a38444fb4e3caf2a2017944cc097, State: Initialized, Role: FOLLOWER
I20250114 20:56:38.497660 20889 tablet_bootstrap.cc:492] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: No bootstrap required, opened a new log
I20250114 20:56:38.497428 20888 consensus_queue.cc:260] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.498051 20889 ts_tablet_manager.cc:1397] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: Time spent bootstrapping tablet: real 0.017s	user 0.000s	sys 0.015s
I20250114 20:56:38.498108 20888 raft_consensus.cc:397] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:38.498535 20888 raft_consensus.cc:491] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:38.498795 20888 raft_consensus.cc:3054] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:38.499183 20890 tablet_bootstrap.cc:492] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: No bootstrap required, opened a new log
I20250114 20:56:38.499727 20890 ts_tablet_manager.cc:1397] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: Time spent bootstrapping tablet: real 0.015s	user 0.002s	sys 0.011s
I20250114 20:56:38.501631 20889 raft_consensus.cc:357] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4b7d8ad610204544a75702e13d05c350" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42565 } }
I20250114 20:56:38.502193 20889 raft_consensus.cc:383] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:38.502107 20890 raft_consensus.cc:357] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "86cc543df1ca435488780fa7f5aad5da" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 34851 } }
I20250114 20:56:38.502550 20889 raft_consensus.cc:738] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4b7d8ad610204544a75702e13d05c350, State: Initialized, Role: FOLLOWER
I20250114 20:56:38.502699 20890 raft_consensus.cc:383] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:38.503094 20890 raft_consensus.cc:738] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 86cc543df1ca435488780fa7f5aad5da, State: Initialized, Role: FOLLOWER
I20250114 20:56:38.503335 20889 consensus_queue.cc:260] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4b7d8ad610204544a75702e13d05c350" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42565 } }
I20250114 20:56:38.503103 20888 raft_consensus.cc:513] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.504060 20889 raft_consensus.cc:397] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:38.503970 20890 consensus_queue.cc:260] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "86cc543df1ca435488780fa7f5aad5da" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 34851 } }
I20250114 20:56:38.504359 20888 leader_election.cc:304] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9162a38444fb4e3caf2a2017944cc097; no voters: 
I20250114 20:56:38.504462 20889 raft_consensus.cc:491] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:38.504678 20890 raft_consensus.cc:397] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:38.504915 20889 raft_consensus.cc:3054] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:38.505034 20890 raft_consensus.cc:491] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:38.505512 20890 raft_consensus.cc:3054] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:38.506374 20888 leader_election.cc:290] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:38.510432 20894 raft_consensus.cc:2798] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:38.511083 20889 raft_consensus.cc:513] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4b7d8ad610204544a75702e13d05c350" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42565 } }
I20250114 20:56:38.511927 20889 leader_election.cc:304] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 4b7d8ad610204544a75702e13d05c350; no voters: 
I20250114 20:56:38.512349 20890 raft_consensus.cc:513] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "86cc543df1ca435488780fa7f5aad5da" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 34851 } }
I20250114 20:56:38.513151 20890 leader_election.cc:304] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 86cc543df1ca435488780fa7f5aad5da; no voters: 
I20250114 20:56:38.515337 20894 raft_consensus.cc:695] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 1 LEADER]: Becoming Leader. State: Replica: 9162a38444fb4e3caf2a2017944cc097, State: Running, Role: LEADER
I20250114 20:56:38.516227 20894 consensus_queue.cc:237] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.520488 20888 ts_tablet_manager.cc:1428] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: Time spent starting tablet: real 0.027s	user 0.009s	sys 0.017s
I20250114 20:56:38.521039 20897 raft_consensus.cc:2798] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:38.521514 20889 leader_election.cc:290] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:38.522097 20888 tablet_bootstrap.cc:492] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: Bootstrap starting.
I20250114 20:56:38.522370 20898 raft_consensus.cc:2798] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:38.522859 20890 leader_election.cc:290] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:38.529477 20888 tablet_bootstrap.cc:654] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:38.535136 20897 raft_consensus.cc:695] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 1 LEADER]: Becoming Leader. State: Replica: 4b7d8ad610204544a75702e13d05c350, State: Running, Role: LEADER
I20250114 20:56:38.544675 20898 raft_consensus.cc:695] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 1 LEADER]: Becoming Leader. State: Replica: 86cc543df1ca435488780fa7f5aad5da, State: Running, Role: LEADER
I20250114 20:56:38.545068 20889 ts_tablet_manager.cc:1428] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: Time spent starting tablet: real 0.047s	user 0.016s	sys 0.020s
I20250114 20:56:38.539770 20401 catalog_manager.cc:5526] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 reported cstate change: term changed from 0 to 1, leader changed from <none> to 9162a38444fb4e3caf2a2017944cc097 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "9162a38444fb4e3caf2a2017944cc097" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:38.551091 20890 ts_tablet_manager.cc:1428] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: Time spent starting tablet: real 0.051s	user 0.021s	sys 0.008s
I20250114 20:56:38.546171 20897 consensus_queue.cc:237] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4b7d8ad610204544a75702e13d05c350" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42565 } }
I20250114 20:56:38.549415 20898 consensus_queue.cc:237] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "86cc543df1ca435488780fa7f5aad5da" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 34851 } }
I20250114 20:56:38.565371 20888 tablet_bootstrap.cc:492] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: No bootstrap required, opened a new log
I20250114 20:56:38.565927 20888 ts_tablet_manager.cc:1397] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: Time spent bootstrapping tablet: real 0.044s	user 0.012s	sys 0.011s
I20250114 20:56:38.566159 20402 catalog_manager.cc:5526] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 reported cstate change: term changed from 0 to 1, leader changed from <none> to 4b7d8ad610204544a75702e13d05c350 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "4b7d8ad610204544a75702e13d05c350" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4b7d8ad610204544a75702e13d05c350" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42565 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:38.569341 20888 raft_consensus.cc:357] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.570021 20888 raft_consensus.cc:383] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:38.570384 20888 raft_consensus.cc:738] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9162a38444fb4e3caf2a2017944cc097, State: Initialized, Role: FOLLOWER
I20250114 20:56:38.574662 20888 consensus_queue.cc:260] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.575426 20888 raft_consensus.cc:397] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:38.575887 20888 raft_consensus.cc:491] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:38.578626 20888 raft_consensus.cc:3054] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:38.604983 20400 catalog_manager.cc:5526] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da reported cstate change: term changed from 0 to 1, leader changed from <none> to 86cc543df1ca435488780fa7f5aad5da (127.19.228.130). New cstate: current_term: 1 leader_uuid: "86cc543df1ca435488780fa7f5aad5da" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "86cc543df1ca435488780fa7f5aad5da" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 34851 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:38.598112 20888 raft_consensus.cc:513] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.608108 20888 leader_election.cc:304] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9162a38444fb4e3caf2a2017944cc097; no voters: 
I20250114 20:56:38.609175 20888 leader_election.cc:290] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:38.609452 20896 raft_consensus.cc:2798] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:38.609944 20896 raft_consensus.cc:695] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 1 LEADER]: Becoming Leader. State: Replica: 9162a38444fb4e3caf2a2017944cc097, State: Running, Role: LEADER
I20250114 20:56:38.610899 20888 ts_tablet_manager.cc:1428] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: Time spent starting tablet: real 0.045s	user 0.010s	sys 0.004s
I20250114 20:56:38.610612 20896 consensus_queue.cc:237] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } }
I20250114 20:56:38.633504 20400 catalog_manager.cc:5526] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 reported cstate change: term changed from 0 to 1, leader changed from <none> to 9162a38444fb4e3caf2a2017944cc097 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "9162a38444fb4e3caf2a2017944cc097" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9162a38444fb4e3caf2a2017944cc097" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33175 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:38.678988 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:56:38.694195 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:38.694949 20370 tablet_replica.cc:331] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097: stopping tablet replica
I20250114 20:56:38.695577 20370 raft_consensus.cc:2238] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:38.696012 20370 raft_consensus.cc:2267] T 85e788cbe05941918a57a8c9efa9a74d P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:38.698371 20370 tablet_replica.cc:331] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097: stopping tablet replica
I20250114 20:56:38.698812 20370 raft_consensus.cc:2238] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:38.699162 20370 raft_consensus.cc:2267] T 3abe274d326247a7a20aa491905853ff P 9162a38444fb4e3caf2a2017944cc097 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:38.837965 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:56:38.846952 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:56:38.864002 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:38.864729 20370 tablet_replica.cc:331] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da: stopping tablet replica
I20250114 20:56:38.865250 20370 raft_consensus.cc:2238] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:38.865648 20370 raft_consensus.cc:2267] T 66238137b8314f01af44b8a9bcc7882a P 86cc543df1ca435488780fa7f5aad5da [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:38.884563 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:56:38.891656 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:56:38.905527 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:38.906229 20370 tablet_replica.cc:331] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350: stopping tablet replica
I20250114 20:56:38.906725 20370 raft_consensus.cc:2238] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:38.907101 20370 raft_consensus.cc:2267] T bbd124158da4488196be8a0c80a0ec8f P 4b7d8ad610204544a75702e13d05c350 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:38.925479 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:56:38.932801 20370 master.cc:537] Master@127.19.228.190:33843 shutting down...
I20250114 20:56:39.030758 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:39.031608 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:39.031932 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 0cb5f82f3f284baeb9e765878aab52d7: stopping tablet replica
I20250114 20:56:39.052441 20370 master.cc:559] Master@127.19.228.190:33843 shutdown complete.
I20250114 20:56:39.063194 20370 master.cc:537] Master@127.19.228.189:34671 shutting down...
I20250114 20:56:39.075196 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:39.075692 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:39.075976 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 1e91e41f6fcb4e89a0c43c650c59dd65: stopping tablet replica
I20250114 20:56:39.102447 20370 master.cc:559] Master@127.19.228.189:34671 shutdown complete.
I20250114 20:56:39.111474 20370 master.cc:537] Master@127.19.228.188:37523 shutting down...
I20250114 20:56:39.128654 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:39.129261 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:39.129601 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P fbfa5afee2ba426e804601bebd76bf83: stopping tablet replica
I20250114 20:56:39.146970 20370 master.cc:559] Master@127.19.228.188:37523 shutdown complete.
[       OK ] AutoRebalancerTest.OnlyLeaderDoesAutoRebalancing (4857 ms)
[ RUN      ] AutoRebalancerTest.AutoRebalancingTurnOffAndOn
I20250114 20:56:39.170622 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:38233
I20250114 20:56:39.171622 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:39.175879 20911 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.176826 20912 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.177538 20914 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:39.178400 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:39.179044 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:39.179208 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:39.179355 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888199179339 us; error 0 us; skew 500 ppm
I20250114 20:56:39.179853 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:39.181986 20370 webserver.cc:458] Webserver started at http://127.19.228.190:34953/ using document root <none> and password file <none>
I20250114 20:56:39.182395 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:39.182564 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:39.182797 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:39.183876 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "558c7e4f8e9448feb28007f5374b1df2"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.187898 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.002s
I20250114 20:56:39.190938 20919 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.191659 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:56:39.191915 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "558c7e4f8e9448feb28007f5374b1df2"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.192163 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:39.211939 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:39.212881 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:39.243752 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:38233
I20250114 20:56:39.243824 20970 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:38233 every 8 connection(s)
I20250114 20:56:39.247311 20971 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:39.255692 20971 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2: Bootstrap starting.
I20250114 20:56:39.259382 20971 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:39.262833 20971 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2: No bootstrap required, opened a new log
I20250114 20:56:39.264526 20971 raft_consensus.cc:357] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER }
I20250114 20:56:39.264876 20971 raft_consensus.cc:383] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:39.265045 20971 raft_consensus.cc:738] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 558c7e4f8e9448feb28007f5374b1df2, State: Initialized, Role: FOLLOWER
I20250114 20:56:39.265473 20971 consensus_queue.cc:260] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER }
I20250114 20:56:39.265852 20971 raft_consensus.cc:397] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:39.266028 20971 raft_consensus.cc:491] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:39.266217 20971 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:39.269834 20971 raft_consensus.cc:513] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER }
I20250114 20:56:39.270272 20971 leader_election.cc:304] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 558c7e4f8e9448feb28007f5374b1df2; no voters: 
I20250114 20:56:39.271422 20971 leader_election.cc:290] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:39.271735 20974 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:39.272873 20974 raft_consensus.cc:695] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 1 LEADER]: Becoming Leader. State: Replica: 558c7e4f8e9448feb28007f5374b1df2, State: Running, Role: LEADER
I20250114 20:56:39.273447 20974 consensus_queue.cc:237] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER }
I20250114 20:56:39.274063 20971 sys_catalog.cc:564] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:39.276088 20975 sys_catalog.cc:455] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "558c7e4f8e9448feb28007f5374b1df2" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER } }
I20250114 20:56:39.276173 20976 sys_catalog.cc:455] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 558c7e4f8e9448feb28007f5374b1df2. Latest consensus state: current_term: 1 leader_uuid: "558c7e4f8e9448feb28007f5374b1df2" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "558c7e4f8e9448feb28007f5374b1df2" member_type: VOTER } }
I20250114 20:56:39.276835 20975 sys_catalog.cc:458] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:39.276880 20976 sys_catalog.cc:458] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:39.279228 20980 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:56:39.283901 20980 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:56:39.289394 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:56:39.291474 20980 catalog_manager.cc:1348] Generated new cluster ID: eba8085470084044937a3d57a0927cdb
I20250114 20:56:39.291774 20980 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:56:39.305678 20980 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:56:39.306818 20980 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:56:39.325951 20980 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2: Generated new TSK 0
I20250114 20:56:39.326527 20980 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:56:39.355964 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:39.361020 20992 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.361938 20993 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:39.363415 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:39.364176 20995 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:39.364899 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:39.365084 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:39.365226 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888199365209 us; error 0 us; skew 500 ppm
I20250114 20:56:39.365723 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:39.367893 20370 webserver.cc:458] Webserver started at http://127.19.228.129:43565/ using document root <none> and password file <none>
I20250114 20:56:39.368300 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:39.368468 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:39.368736 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:39.369725 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.373759 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.001s
I20250114 20:56:39.376740 21000 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.377395 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:39.377641 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.377890 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:39.393110 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:39.394013 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:39.395277 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:39.397346 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:39.397526 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.397737 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:39.397879 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.432862 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:33425
I20250114 20:56:39.432950 21062 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:33425 every 8 connection(s)
I20250114 20:56:39.436892 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:39.443838 21067 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.444537 21068 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.446717 21070 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:39.447469 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:39.448169 21063 heartbeater.cc:346] Connected to a master server at 127.19.228.190:38233
I20250114 20:56:39.448211 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:39.448534 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:39.448554 21063 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:39.448819 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888199448800 us; error 0 us; skew 500 ppm
I20250114 20:56:39.449324 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:39.449322 21063 heartbeater.cc:510] Master 127.19.228.190:38233 requested a full tablet report, sending...
I20250114 20:56:39.451200 20936 ts_manager.cc:194] Registered new tserver with Master: 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425)
I20250114 20:56:39.451943 20370 webserver.cc:458] Webserver started at http://127.19.228.130:40453/ using document root <none> and password file <none>
I20250114 20:56:39.452382 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:39.452549 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:39.452775 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:39.452881 20936 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50986
I20250114 20:56:39.453764 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "8b7331949c9943e3a7ba51b6c070c834"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.457715 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.005s	sys 0.000s
I20250114 20:56:39.460422 21075 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.461094 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:56:39.461354 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "8b7331949c9943e3a7ba51b6c070c834"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.461604 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:39.474056 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:39.474884 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:39.476173 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:39.478145 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:39.478324 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.478528 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:39.478670 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.511898 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:46011
I20250114 20:56:39.512006 21137 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:46011 every 8 connection(s)
I20250114 20:56:39.516045 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:39.522264 21141 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.524129 21142 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:39.526391 21144 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:39.526623 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:39.527024 21138 heartbeater.cc:346] Connected to a master server at 127.19.228.190:38233
I20250114 20:56:39.527418 21138 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:39.527603 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:39.527843 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:39.528035 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888199528017 us; error 0 us; skew 500 ppm
I20250114 20:56:39.528226 21138 heartbeater.cc:510] Master 127.19.228.190:38233 requested a full tablet report, sending...
I20250114 20:56:39.528715 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:39.530184 20936 ts_manager.cc:194] Registered new tserver with Master: 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:39.531419 20370 webserver.cc:458] Webserver started at http://127.19.228.131:37461/ using document root <none> and password file <none>
I20250114 20:56:39.531618 20936 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50998
I20250114 20:56:39.531911 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:39.532126 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:39.532371 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:39.533385 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.537199 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.005s	sys 0.000s
I20250114 20:56:39.539935 21149 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.540638 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:56:39.540884 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
format_stamp: "Formatted at 2025-01-14 20:56:39 on dist-test-slave-kc3q"
I20250114 20:56:39.541139 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:39.556618 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:39.557453 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:39.558676 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:39.560745 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:39.560930 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.561126 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:39.561267 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:39.594399 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:35827
I20250114 20:56:39.594501 21211 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:35827 every 8 connection(s)
I20250114 20:56:39.606160 21212 heartbeater.cc:346] Connected to a master server at 127.19.228.190:38233
I20250114 20:56:39.606487 21212 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:39.607174 21212 heartbeater.cc:510] Master 127.19.228.190:38233 requested a full tablet report, sending...
I20250114 20:56:39.608803 20936 ts_manager.cc:194] Registered new tserver with Master: b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:39.609216 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012029544s
I20250114 20:56:39.610000 20936 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51004
I20250114 20:56:40.455067 21063 heartbeater.cc:502] Master 127.19.228.190:38233 was elected leader, sending a full tablet report...
I20250114 20:56:40.533682 21138 heartbeater.cc:502] Master 127.19.228.190:38233 was elected leader, sending a full tablet report...
I20250114 20:56:40.612416 21212 heartbeater.cc:502] Master 127.19.228.190:38233 was elected leader, sending a full tablet report...
I20250114 20:56:40.640666 20370 test_util.cc:274] Using random seed: -868829051
I20250114 20:56:40.672863 20936 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51010:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\017\004\001\000\376\377\377\037\004\001\000\375\377\377/\004\001\000\374\377\377?\004\001\000\373\377\377O\004\001\000\372\377\377_\004\001\000\371\377\377o"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:56:40.675169 20936 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:56:40.732015 21103 tablet_service.cc:1467] Processing CreateTablet for tablet f57aaf54ced04936902099c4ac2171b8 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION VALUES < 268435455
I20250114 20:56:40.733307 21103 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f57aaf54ced04936902099c4ac2171b8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.733229 21099 tablet_service.cc:1467] Processing CreateTablet for tablet 7b2ff951b26e4289aa7e5e2221df163f (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1073741820 <= VALUES < 1342177275
I20250114 20:56:40.735675 21100 tablet_service.cc:1467] Processing CreateTablet for tablet 93e308aed06f4681be0ac210581cea28 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 805306365 <= VALUES < 1073741820
I20250114 20:56:40.736754 21098 tablet_service.cc:1467] Processing CreateTablet for tablet ef19f768f7cd4f13989c35d8426f6de2 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1342177275 <= VALUES < 1610612730
I20250114 20:56:40.737696 21101 tablet_service.cc:1467] Processing CreateTablet for tablet 0fa752f989fa44768476abbe7be0207e (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 536870910 <= VALUES < 805306365
I20250114 20:56:40.738842 21027 tablet_service.cc:1467] Processing CreateTablet for tablet 6f304dd85c4046ab91e8ab8e7a8267bc (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 268435455 <= VALUES < 536870910
I20250114 20:56:40.740096 21027 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6f304dd85c4046ab91e8ab8e7a8267bc. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.739881 21102 tablet_service.cc:1467] Processing CreateTablet for tablet 6f304dd85c4046ab91e8ab8e7a8267bc (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 268435455 <= VALUES < 536870910
I20250114 20:56:40.743136 21097 tablet_service.cc:1467] Processing CreateTablet for tablet f55b0e8f77054cf4b5b26fffaed71f6d (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1610612730 <= VALUES < 1879048185
I20250114 20:56:40.744596 21023 tablet_service.cc:1467] Processing CreateTablet for tablet ef19f768f7cd4f13989c35d8426f6de2 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1342177275 <= VALUES < 1610612730
I20250114 20:56:40.744478 21099 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7b2ff951b26e4289aa7e5e2221df163f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.745806 21102 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6f304dd85c4046ab91e8ab8e7a8267bc. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.746799 21025 tablet_service.cc:1467] Processing CreateTablet for tablet 93e308aed06f4681be0ac210581cea28 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 805306365 <= VALUES < 1073741820
I20250114 20:56:40.748742 21024 tablet_service.cc:1467] Processing CreateTablet for tablet 7b2ff951b26e4289aa7e5e2221df163f (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1073741820 <= VALUES < 1342177275
I20250114 20:56:40.750093 21098 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef19f768f7cd4f13989c35d8426f6de2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.750015 21026 tablet_service.cc:1467] Processing CreateTablet for tablet 0fa752f989fa44768476abbe7be0207e (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 536870910 <= VALUES < 805306365
I20250114 20:56:40.737713 21028 tablet_service.cc:1467] Processing CreateTablet for tablet f57aaf54ced04936902099c4ac2171b8 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION VALUES < 268435455
I20250114 20:56:40.753273 21022 tablet_service.cc:1467] Processing CreateTablet for tablet f55b0e8f77054cf4b5b26fffaed71f6d (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1610612730 <= VALUES < 1879048185
I20250114 20:56:40.755863 21101 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0fa752f989fa44768476abbe7be0207e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.759907 21100 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 93e308aed06f4681be0ac210581cea28. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.761844 21177 tablet_service.cc:1467] Processing CreateTablet for tablet f57aaf54ced04936902099c4ac2171b8 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION VALUES < 268435455
I20250114 20:56:40.763083 21177 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f57aaf54ced04936902099c4ac2171b8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.764377 21097 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f55b0e8f77054cf4b5b26fffaed71f6d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.766165 21176 tablet_service.cc:1467] Processing CreateTablet for tablet 6f304dd85c4046ab91e8ab8e7a8267bc (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 268435455 <= VALUES < 536870910
I20250114 20:56:40.766319 21175 tablet_service.cc:1467] Processing CreateTablet for tablet 0fa752f989fa44768476abbe7be0207e (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 536870910 <= VALUES < 805306365
I20250114 20:56:40.745833 21023 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef19f768f7cd4f13989c35d8426f6de2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.768539 21174 tablet_service.cc:1467] Processing CreateTablet for tablet 93e308aed06f4681be0ac210581cea28 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 805306365 <= VALUES < 1073741820
I20250114 20:56:40.771843 21022 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f55b0e8f77054cf4b5b26fffaed71f6d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.773912 21025 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 93e308aed06f4681be0ac210581cea28. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.774866 21028 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f57aaf54ced04936902099c4ac2171b8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.775858 21026 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0fa752f989fa44768476abbe7be0207e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.779887 21024 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7b2ff951b26e4289aa7e5e2221df163f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.767441 21176 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6f304dd85c4046ab91e8ab8e7a8267bc. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.783936 21175 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0fa752f989fa44768476abbe7be0207e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.791877 21174 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 93e308aed06f4681be0ac210581cea28. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.802282 21173 tablet_service.cc:1467] Processing CreateTablet for tablet 7b2ff951b26e4289aa7e5e2221df163f (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1073741820 <= VALUES < 1342177275
I20250114 20:56:40.804770 21172 tablet_service.cc:1467] Processing CreateTablet for tablet ef19f768f7cd4f13989c35d8426f6de2 (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1342177275 <= VALUES < 1610612730
I20250114 20:56:40.806239 21173 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7b2ff951b26e4289aa7e5e2221df163f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.809032 21233 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:40.811079 21172 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef19f768f7cd4f13989c35d8426f6de2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.825335 21234 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:40.830693 21233 tablet_bootstrap.cc:654] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.832077 21232 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:40.836002 21234 tablet_bootstrap.cc:654] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.837432 21232 tablet_bootstrap.cc:654] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.848511 21232 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:40.848963 21232 ts_tablet_manager.cc:1397] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.017s	user 0.007s	sys 0.007s
I20250114 20:56:40.851373 21232 raft_consensus.cc:357] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.852119 21232 raft_consensus.cc:383] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.852385 21232 raft_consensus.cc:738] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.853194 21232 consensus_queue.cc:260] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.858790 21175 tablet_service.cc:1467] Processing CreateTablet for tablet f55b0e8f77054cf4b5b26fffaed71f6d (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1610612730 <= VALUES < 1879048185
I20250114 20:56:40.860003 21175 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f55b0e8f77054cf4b5b26fffaed71f6d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.873342 21232 ts_tablet_manager.cc:1428] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.024s	user 0.007s	sys 0.008s
I20250114 20:56:40.874289 21232 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:40.879208 21101 tablet_service.cc:1467] Processing CreateTablet for tablet eaa90c0d51d54c47b6c8bb446786b0ed (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1879048185 <= VALUES
I20250114 20:56:40.887986 21176 tablet_service.cc:1467] Processing CreateTablet for tablet eaa90c0d51d54c47b6c8bb446786b0ed (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1879048185 <= VALUES
I20250114 20:56:40.889144 21176 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet eaa90c0d51d54c47b6c8bb446786b0ed. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.889992 21028 tablet_service.cc:1467] Processing CreateTablet for tablet eaa90c0d51d54c47b6c8bb446786b0ed (DEFAULT_TABLE table=test-workload [id=c8925505764e4bd080732c13d7aa385d]), partition=RANGE (key) PARTITION 1879048185 <= VALUES
I20250114 20:56:40.891286 21028 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet eaa90c0d51d54c47b6c8bb446786b0ed. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.889235 21233 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:40.894340 21233 ts_tablet_manager.cc:1397] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.086s	user 0.025s	sys 0.023s
I20250114 20:56:40.896158 21101 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet eaa90c0d51d54c47b6c8bb446786b0ed. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:40.896994 21233 raft_consensus.cc:357] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:40.897814 21233 raft_consensus.cc:383] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.898123 21233 raft_consensus.cc:738] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.902726 21233 consensus_queue.cc:260] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:40.904659 21232 tablet_bootstrap.cc:654] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.912204 21238 raft_consensus.cc:491] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:40.912730 21238 raft_consensus.cc:513] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.914599 21234 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:40.915046 21234 ts_tablet_manager.cc:1397] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.090s	user 0.024s	sys 0.013s
I20250114 20:56:40.917294 21234 raft_consensus.cc:357] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:40.920070 21234 raft_consensus.cc:383] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.920338 21234 raft_consensus.cc:738] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.921017 21234 consensus_queue.cc:260] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:40.922542 21238 leader_election.cc:290] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:40.929594 21233 ts_tablet_manager.cc:1428] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.035s	user 0.013s	sys 0.009s
I20250114 20:56:40.930882 21233 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:40.946062 21234 ts_tablet_manager.cc:1428] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.031s	user 0.014s	sys 0.013s
I20250114 20:56:40.946141 21233 tablet_bootstrap.cc:654] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.947083 21234 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:40.955380 21232 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:40.955799 21232 ts_tablet_manager.cc:1397] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.082s	user 0.008s	sys 0.017s
I20250114 20:56:40.956413 21234 tablet_bootstrap.cc:654] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.958240 21232 raft_consensus.cc:357] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.959560 21232 raft_consensus.cc:383] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.960655 21232 raft_consensus.cc:738] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.961238 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:40.961345 21232 consensus_queue.cc:260] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
W20250114 20:56:40.963251 21004 leader_election.cc:343] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:40.963976 21232 ts_tablet_manager.cc:1428] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.008s	user 0.003s	sys 0.003s
I20250114 20:56:40.964880 21232 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:40.967528 21234 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:40.968016 21234 ts_tablet_manager.cc:1397] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.021s	user 0.010s	sys 0.004s
I20250114 20:56:40.970755 21232 tablet_bootstrap.cc:654] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.970460 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:40.970453 21234 raft_consensus.cc:357] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
W20250114 20:56:40.971936 21001 leader_election.cc:343] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:40.972252 21234 raft_consensus.cc:383] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.972373 21001 leader_election.cc:304] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec; no voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:40.972602 21234 raft_consensus.cc:738] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.973548 21238 raft_consensus.cc:2743] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election lost for term 1. Reason: could not achieve majority
I20250114 20:56:40.973417 21234 consensus_queue.cc:260] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.975713 21234 ts_tablet_manager.cc:1428] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.007s	user 0.007s	sys 0.000s
I20250114 20:56:40.974211 21233 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:40.976606 21234 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:40.979487 21233 ts_tablet_manager.cc:1397] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.049s	user 0.013s	sys 0.001s
I20250114 20:56:40.982342 21234 tablet_bootstrap.cc:654] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:40.984241 21233 raft_consensus.cc:357] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.985031 21233 raft_consensus.cc:383] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.985337 21233 raft_consensus.cc:738] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.986654 21233 consensus_queue.cc:260] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.988130 21232 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:40.988435 21233 ts_tablet_manager.cc:1428] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.006s	user 0.005s	sys 0.000s
I20250114 20:56:40.988572 21232 ts_tablet_manager.cc:1397] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.024s	user 0.012s	sys 0.009s
I20250114 20:56:40.989182 21233 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:40.995965 21234 tablet_bootstrap.cc:492] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:40.996414 21234 ts_tablet_manager.cc:1397] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.020s	user 0.010s	sys 0.007s
I20250114 20:56:40.997233 21232 raft_consensus.cc:357] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.997885 21232 raft_consensus.cc:383] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.998171 21232 raft_consensus.cc:738] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:40.998824 21232 consensus_queue.cc:260] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.999006 21234 raft_consensus.cc:357] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:40.999661 21234 raft_consensus.cc:383] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:40.999946 21234 raft_consensus.cc:738] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.000078 21233 tablet_bootstrap.cc:654] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.001044 21232 ts_tablet_manager.cc:1428] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.012s	user 0.006s	sys 0.000s
I20250114 20:56:41.000599 21234 consensus_queue.cc:260] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.002025 21232 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:41.003347 21234 ts_tablet_manager.cc:1428] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.007s	user 0.004s	sys 0.000s
I20250114 20:56:41.004271 21234 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:41.009838 21234 tablet_bootstrap.cc:654] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.009838 21232 tablet_bootstrap.cc:654] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.020560 21233 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.021019 21233 ts_tablet_manager.cc:1397] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.032s	user 0.010s	sys 0.013s
I20250114 20:56:41.023397 21233 raft_consensus.cc:357] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.023380 21234 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:41.024144 21233 raft_consensus.cc:383] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.024173 21242 raft_consensus.cc:491] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.024410 21234 ts_tablet_manager.cc:1397] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.020s	user 0.010s	sys 0.009s
I20250114 20:56:41.024511 21233 raft_consensus.cc:738] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.024732 21242 raft_consensus.cc:513] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.025349 21232 tablet_bootstrap.cc:492] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:41.025816 21232 ts_tablet_manager.cc:1397] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.024s	user 0.008s	sys 0.003s
I20250114 20:56:41.025537 21233 consensus_queue.cc:260] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.026825 21234 raft_consensus.cc:357] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.027510 21234 raft_consensus.cc:383] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.028079 21234 raft_consensus.cc:738] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.028118 21232 raft_consensus.cc:357] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.028899 21232 raft_consensus.cc:383] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.029309 21232 raft_consensus.cc:738] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.028856 21234 consensus_queue.cc:260] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.029924 21232 consensus_queue.cc:260] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.032207 21232 ts_tablet_manager.cc:1428] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.000s
I20250114 20:56:41.033082 21232 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:41.033612 21240 raft_consensus.cc:491] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.034091 21240 raft_consensus.cc:513] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.036478 21240 leader_election.cc:290] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.039207 21232 tablet_bootstrap.cc:654] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.039911 21242 leader_election.cc:290] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.056069 21233 ts_tablet_manager.cc:1428] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.035s	user 0.001s	sys 0.007s
I20250114 20:56:41.067571 21233 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:41.068440 21242 raft_consensus.cc:491] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.068894 21242 raft_consensus.cc:513] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.070724 21242 leader_election.cc:290] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.089516 21038 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.090375 21038 raft_consensus.cc:2463] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.091462 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.092103 21234 ts_tablet_manager.cc:1428] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.067s	user 0.021s	sys 0.046s
I20250114 20:56:41.092344 21037 raft_consensus.cc:2463] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.093060 21234 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:41.094452 21233 tablet_bootstrap.cc:654] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.095959 21150 leader_election.cc:304] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.096930 21150 leader_election.cc:304] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.097656 21242 raft_consensus.cc:2798] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.098731 21242 raft_consensus.cc:491] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.099027 21242 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.100112 21254 raft_consensus.cc:2798] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.100430 21254 raft_consensus.cc:491] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.100677 21254 raft_consensus.cc:3054] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.108392 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.109836 21037 raft_consensus.cc:2463] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 8b7331949c9943e3a7ba51b6c070c834 in term 0.
I20250114 20:56:41.110697 21234 tablet_bootstrap.cc:654] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.109690 21254 raft_consensus.cc:513] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.112105 21076 leader_election.cc:304] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834; no voters: 
I20250114 20:56:41.114063 21240 raft_consensus.cc:2798] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.114557 21240 raft_consensus.cc:491] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.114881 21240 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.117383 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.117959 21037 raft_consensus.cc:3054] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.118908 21254 leader_election.cc:290] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.119807 21232 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:41.120306 21232 ts_tablet_manager.cc:1397] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.087s	user 0.028s	sys 0.039s
I20250114 20:56:41.120523 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.115664 21242 raft_consensus.cc:513] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.122774 21242 leader_election.cc:290] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.124292 21112 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.125100 21111 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:41.123450 21038 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.126753 21038 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
W20250114 20:56:41.129024 21153 leader_election.cc:343] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.125626 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
W20250114 20:56:41.130899 21153 leader_election.cc:343] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Tablet error from VoteRequest() call to peer 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.129338 21232 raft_consensus.cc:357] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.134121 21234 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:41.136224 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.136587 21232 raft_consensus.cc:383] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.136801 21234 ts_tablet_manager.cc:1397] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.044s	user 0.003s	sys 0.014s
I20250114 20:56:41.137040 21187 raft_consensus.cc:2388] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 8b7331949c9943e3a7ba51b6c070c834 in current term 1: Already voted for candidate b135e10ff79849eda53c7a4cbd5ddaff in this term.
I20250114 20:56:41.137077 21232 raft_consensus.cc:738] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.134141 21233 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.138115 21232 consensus_queue.cc:260] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.138939 21240 raft_consensus.cc:513] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.138239 21233 ts_tablet_manager.cc:1397] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.071s	user 0.005s	sys 0.016s
I20250114 20:56:41.136073 21038 raft_consensus.cc:2463] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.140859 21113 raft_consensus.cc:2388] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate b135e10ff79849eda53c7a4cbd5ddaff in current term 1: Already voted for candidate 8b7331949c9943e3a7ba51b6c070c834 in this term.
I20250114 20:56:41.141106 21234 raft_consensus.cc:357] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.141959 21234 raft_consensus.cc:383] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.142375 21234 raft_consensus.cc:738] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.142987 21233 raft_consensus.cc:357] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.143683 21233 raft_consensus.cc:383] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.143968 21233 raft_consensus.cc:738] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.143182 21234 consensus_queue.cc:260] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.145748 21150 leader_election.cc:304] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 8b7331949c9943e3a7ba51b6c070c834
I20250114 20:56:41.146821 21232 ts_tablet_manager.cc:1428] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.026s	user 0.005s	sys 0.003s
I20250114 20:56:41.147243 21242 raft_consensus.cc:2798] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.147734 21234 ts_tablet_manager.cc:1428] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.011s	user 0.004s	sys 0.003s
I20250114 20:56:41.148360 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:41.147738 21232 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:41.147719 21242 raft_consensus.cc:695] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Leader. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:41.149945 21242 consensus_queue.cc:237] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:41.144558 21233 consensus_queue.cc:260] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.148562 21234 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:41.154035 21038 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.154568 21038 raft_consensus.cc:2388] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate 8b7331949c9943e3a7ba51b6c070c834 in current term 1: Already voted for candidate b135e10ff79849eda53c7a4cbd5ddaff in this term.
I20250114 20:56:41.146349 21240 leader_election.cc:290] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.152832 21037 raft_consensus.cc:2463] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.159149 21150 leader_election.cc:304] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 8b7331949c9943e3a7ba51b6c070c834
I20250114 20:56:41.160506 21242 raft_consensus.cc:2798] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.161100 21242 raft_consensus.cc:695] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Leader. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:41.161515 21076 leader_election.cc:304] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834; no voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:41.162458 21233 ts_tablet_manager.cc:1428] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.023s	user 0.005s	sys 0.004s
I20250114 20:56:41.161967 21242 consensus_queue.cc:237] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.163471 21233 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:41.166364 21240 raft_consensus.cc:2743] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election lost for term 1. Reason: could not achieve majority
I20250114 20:56:41.170986 21234 tablet_bootstrap.cc:654] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.165824 20936 catalog_manager.cc:5526] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: term changed from 0 to 1, leader changed from <none> to b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131). New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:41.173874 21233 tablet_bootstrap.cc:654] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.174296 21232 tablet_bootstrap.cc:654] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.175637 21242 raft_consensus.cc:491] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.176081 21242 raft_consensus.cc:513] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.178789 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "93e308aed06f4681be0ac210581cea28" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.179674 21037 raft_consensus.cc:2463] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.180717 21150 leader_election.cc:304] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.182041 21254 raft_consensus.cc:2798] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.182348 21254 raft_consensus.cc:491] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.182655 21254 raft_consensus.cc:3054] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.183053 21233 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.183467 21233 ts_tablet_manager.cc:1397] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.020s	user 0.008s	sys 0.004s
I20250114 20:56:41.186059 21233 raft_consensus.cc:357] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.186770 21233 raft_consensus.cc:383] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.187040 21233 raft_consensus.cc:738] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.189952 21234 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:41.190385 21234 ts_tablet_manager.cc:1397] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.042s	user 0.016s	sys 0.006s
I20250114 20:56:41.190521 20936 catalog_manager.cc:5526] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: term changed from 0 to 1, leader changed from <none> to b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131). New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:41.190845 21233 consensus_queue.cc:260] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.191465 21254 raft_consensus.cc:513] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.194263 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "93e308aed06f4681be0ac210581cea28" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.194880 21037 raft_consensus.cc:3054] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.195298 21233 ts_tablet_manager.cc:1428] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.012s	user 0.005s	sys 0.000s
I20250114 20:56:41.196142 21233 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:41.201308 21037 raft_consensus.cc:2463] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.203364 21240 raft_consensus.cc:491] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.205588 21150 leader_election.cc:304] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.203047 21234 raft_consensus.cc:357] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.208019 21234 raft_consensus.cc:383] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.208302 21234 raft_consensus.cc:738] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.208945 21234 consensus_queue.cc:260] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.204229 21233 tablet_bootstrap.cc:654] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.204162 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "93e308aed06f4681be0ac210581cea28" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:41.210577 21110 raft_consensus.cc:3054] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.213864 21267 raft_consensus.cc:2798] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.217855 21232 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:41.204638 21242 leader_election.cc:290] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.204845 21254 leader_election.cc:290] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.205282 21240 raft_consensus.cc:513] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.218333 21232 ts_tablet_manager.cc:1397] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.071s	user 0.005s	sys 0.024s
I20250114 20:56:41.218672 21234 ts_tablet_manager.cc:1428] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.027s	user 0.010s	sys 0.001s
I20250114 20:56:41.217595 21110 raft_consensus.cc:2463] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.219696 21240 leader_election.cc:290] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.203117 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "93e308aed06f4681be0ac210581cea28" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.220969 21113 raft_consensus.cc:2371] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader pre-election vote request: Already granted yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1. Re-sending same reply.
I20250114 20:56:41.221801 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.221758 21232 raft_consensus.cc:357] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.222579 21232 raft_consensus.cc:383] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.222851 21232 raft_consensus.cc:738] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
W20250114 20:56:41.223498 21076 leader_election.cc:343] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.223521 21232 consensus_queue.cc:260] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.224514 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.225315 21037 raft_consensus.cc:2463] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 8b7331949c9943e3a7ba51b6c070c834 in term 0.
I20250114 20:56:41.225917 21267 raft_consensus.cc:695] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Leader. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:41.226694 21076 leader_election.cc:304] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834; no voters: b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:41.226661 21267 consensus_queue.cc:237] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.227949 21240 raft_consensus.cc:2798] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.228302 21240 raft_consensus.cc:491] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.228615 21240 raft_consensus.cc:3054] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.233150 21232 ts_tablet_manager.cc:1428] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.014s	user 0.006s	sys 0.003s
I20250114 20:56:41.234153 21232 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:41.234915 21240 raft_consensus.cc:513] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.238613 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:41.238831 21238 raft_consensus.cc:491] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.239252 21238 raft_consensus.cc:513] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
W20250114 20:56:41.239923 21076 leader_election.cc:343] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Tablet error from VoteRequest() call to peer b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.241024 21238 leader_election.cc:290] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.241099 21240 leader_election.cc:290] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.242435 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ef19f768f7cd4f13989c35d8426f6de2" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.242962 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ef19f768f7cd4f13989c35d8426f6de2" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.243516 21187 raft_consensus.cc:2463] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 0.
W20250114 20:56:41.243752 21004 leader_election.cc:343] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.243865 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.244450 21037 raft_consensus.cc:3054] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.241029 21234 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:41.244590 21001 leader_election.cc:304] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 8b7331949c9943e3a7ba51b6c070c834
I20250114 20:56:41.244793 20936 catalog_manager.cc:5526] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: term changed from 0 to 1, leader changed from <none> to b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131). New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:41.245640 21238 raft_consensus.cc:2798] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.246021 21238 raft_consensus.cc:491] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.246342 21238 raft_consensus.cc:3054] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.252110 21240 raft_consensus.cc:491] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.252673 21234 tablet_bootstrap.cc:654] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.252790 21232 tablet_bootstrap.cc:654] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.252545 21240 raft_consensus.cc:513] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.255396 21038 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.255949 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.256594 21187 raft_consensus.cc:2463] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 8b7331949c9943e3a7ba51b6c070c834 in term 0.
W20250114 20:56:41.256768 21076 leader_election.cc:343] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Tablet error from VoteRequest() call to peer 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.257561 21076 leader_election.cc:304] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 61d77d3530a14ebaaf0a1b134e4e28ec
I20250114 20:56:41.258762 21262 raft_consensus.cc:2798] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.259152 21262 raft_consensus.cc:491] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.259457 21262 raft_consensus.cc:3054] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.257550 21240 leader_election.cc:290] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.266420 21037 raft_consensus.cc:2463] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 8b7331949c9943e3a7ba51b6c070c834 in term 1.
I20250114 20:56:41.267237 21076 leader_election.cc:304] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834; no voters: b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:41.271708 21233 tablet_bootstrap.cc:492] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.272171 21233 ts_tablet_manager.cc:1397] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.076s	user 0.019s	sys 0.010s
I20250114 20:56:41.272706 21234 tablet_bootstrap.cc:492] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:41.273065 21234 ts_tablet_manager.cc:1397] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.032s	user 0.016s	sys 0.000s
I20250114 20:56:41.274907 21238 raft_consensus.cc:513] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.277827 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ef19f768f7cd4f13989c35d8426f6de2" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:41.276131 21234 raft_consensus.cc:357] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.278425 21187 raft_consensus.cc:3054] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.279316 21238 leader_election.cc:290] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 election: Requested vote from peers 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.279531 21234 raft_consensus.cc:383] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.280166 21234 raft_consensus.cc:738] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.280622 21262 raft_consensus.cc:513] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.280879 21234 consensus_queue.cc:260] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.283906 21234 ts_tablet_manager.cc:1428] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.011s	user 0.008s	sys 0.000s
I20250114 20:56:41.284857 21234 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: Bootstrap starting.
I20250114 20:56:41.285373 21187 raft_consensus.cc:2463] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 1.
I20250114 20:56:41.278440 21233 raft_consensus.cc:357] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.286767 21233 raft_consensus.cc:383] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.287127 21233 raft_consensus.cc:738] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.273626 21240 raft_consensus.cc:2798] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.288431 21001 leader_election.cc:304] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.288341 21233 consensus_queue.cc:260] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.289777 21238 raft_consensus.cc:2798] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.290254 21238 raft_consensus.cc:695] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 LEADER]: Becoming Leader. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Running, Role: LEADER
I20250114 20:56:41.290783 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.290959 21238 consensus_queue.cc:237] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.291690 21233 ts_tablet_manager.cc:1428] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.019s	user 0.008s	sys 0.000s
W20250114 20:56:41.288300 20984 auto_rebalancer.cc:227] Could not retrieve cluster info: Service unavailable: Tablet not running
I20250114 20:56:41.289831 20989 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:41.294106 21262 leader_election.cc:290] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.292575 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ef19f768f7cd4f13989c35d8426f6de2" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:41.295291 20989 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:41.295568 21113 raft_consensus.cc:3054] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
W20250114 20:56:41.296473 21076 leader_election.cc:343] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Tablet error from VoteRequest() call to peer 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425): Illegal state: must be running to vote when last-logged opid is not known
I20250114 20:56:41.299893 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:41.300408 21187 raft_consensus.cc:3054] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.301746 21232 tablet_bootstrap.cc:492] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:41.301899 21233 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:41.302390 21232 ts_tablet_manager.cc:1397] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.068s	user 0.017s	sys 0.007s
I20250114 20:56:41.303912 21267 raft_consensus.cc:491] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.304320 21267 raft_consensus.cc:513] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.305423 21232 raft_consensus.cc:357] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.305909 21232 raft_consensus.cc:383] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.306078 21232 raft_consensus.cc:738] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.307085 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.307188 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f55b0e8f77054cf4b5b26fffaed71f6d" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.306914 21232 consensus_queue.cc:260] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.308470 21037 raft_consensus.cc:2388] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate b135e10ff79849eda53c7a4cbd5ddaff in current term 1: Already voted for candidate 8b7331949c9943e3a7ba51b6c070c834 in this term.
I20250114 20:56:41.309484 21232 ts_tablet_manager.cc:1428] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.007s	user 0.001s	sys 0.004s
I20250114 20:56:41.306123 21267 leader_election.cc:290] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.310299 21232 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: Bootstrap starting.
I20250114 20:56:41.311178 21113 raft_consensus.cc:2463] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 1.
I20250114 20:56:41.315335 21187 raft_consensus.cc:2463] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 8b7331949c9943e3a7ba51b6c070c834 in term 1.
I20250114 20:56:41.318681 21076 leader_election.cc:304] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 61d77d3530a14ebaaf0a1b134e4e28ec
I20250114 20:56:41.319136 21234 tablet_bootstrap.cc:654] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.319763 21274 raft_consensus.cc:2798] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.320681 21240 raft_consensus.cc:695] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 LEADER]: Becoming Leader. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Running, Role: LEADER
I20250114 20:56:41.321125 21274 raft_consensus.cc:695] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 1 LEADER]: Becoming Leader. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Running, Role: LEADER
I20250114 20:56:41.321472 21240 consensus_queue.cc:237] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.322794 21233 tablet_bootstrap.cc:654] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.322103 20934 catalog_manager.cc:5526] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec reported cstate change: term changed from 0 to 1, leader changed from <none> to 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129). New cstate: current_term: 1 leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:41.323238 21274 consensus_queue.cc:237] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.329384 21232 tablet_bootstrap.cc:654] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.344164 21153 leader_election.cc:304] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: b135e10ff79849eda53c7a4cbd5ddaff; no voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834
I20250114 20:56:41.347249 20936 catalog_manager.cc:5526] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "8b7331949c9943e3a7ba51b6c070c834" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:41.349603 21267 raft_consensus.cc:3054] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.363924 21267 raft_consensus.cc:2743] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader pre-election lost for term 1. Reason: could not achieve majority
I20250114 20:56:41.366715 21232 tablet_bootstrap.cc:492] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: No bootstrap required, opened a new log
I20250114 20:56:41.367172 21232 ts_tablet_manager.cc:1397] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent bootstrapping tablet: real 0.057s	user 0.009s	sys 0.017s
I20250114 20:56:41.368219 21234 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: No bootstrap required, opened a new log
I20250114 20:56:41.368649 21234 ts_tablet_manager.cc:1397] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: Time spent bootstrapping tablet: real 0.084s	user 0.009s	sys 0.018s
I20250114 20:56:41.369451 21232 raft_consensus.cc:357] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.370108 21232 raft_consensus.cc:383] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.370352 21232 raft_consensus.cc:738] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.370921 21232 consensus_queue.cc:260] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.371440 21234 raft_consensus.cc:357] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.371937 21233 tablet_bootstrap.cc:492] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.372151 21234 raft_consensus.cc:383] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.372403 21234 raft_consensus.cc:738] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.372606 21233 ts_tablet_manager.cc:1397] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.071s	user 0.008s	sys 0.005s
I20250114 20:56:41.373374 21232 ts_tablet_manager.cc:1428] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.000s
I20250114 20:56:41.373844 21234 consensus_queue.cc:260] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.375025 21233 raft_consensus.cc:357] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.375859 21234 ts_tablet_manager.cc:1428] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: Time spent starting tablet: real 0.007s	user 0.007s	sys 0.001s
I20250114 20:56:41.376261 21233 raft_consensus.cc:383] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.376502 21233 raft_consensus.cc:738] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.377141 21233 consensus_queue.cc:260] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.383396 21233 ts_tablet_manager.cc:1428] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.010s	user 0.005s	sys 0.000s
I20250114 20:56:41.384428 21233 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: Bootstrap starting.
I20250114 20:56:41.390404 21233 tablet_bootstrap.cc:654] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:41.390513 20934 catalog_manager.cc:5526] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "8b7331949c9943e3a7ba51b6c070c834" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:41.394856 21233 tablet_bootstrap.cc:492] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: No bootstrap required, opened a new log
I20250114 20:56:41.395233 21233 ts_tablet_manager.cc:1397] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: Time spent bootstrapping tablet: real 0.011s	user 0.007s	sys 0.002s
I20250114 20:56:41.396845 21233 raft_consensus.cc:357] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.397271 21233 raft_consensus.cc:383] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:41.397457 21233 raft_consensus.cc:738] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8b7331949c9943e3a7ba51b6c070c834, State: Initialized, Role: FOLLOWER
I20250114 20:56:41.397892 21233 consensus_queue.cc:260] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.399664 21233 ts_tablet_manager.cc:1428] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: Time spent starting tablet: real 0.004s	user 0.003s	sys 0.003s
I20250114 20:56:41.400182 21267 raft_consensus.cc:491] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.401059 21267 raft_consensus.cc:513] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.402478 21267 leader_election.cc:290] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.402873 21267 raft_consensus.cc:491] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.403098 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "eaa90c0d51d54c47b6c8bb446786b0ed" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.403273 21267 raft_consensus.cc:513] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.403738 21037 raft_consensus.cc:2463] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.403671 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "eaa90c0d51d54c47b6c8bb446786b0ed" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.404444 21110 raft_consensus.cc:2463] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.404934 21150 leader_election.cc:304] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.405990 21267 leader_election.cc:290] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.406466 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.407130 21110 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.407196 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.408114 21153 leader_election.cc:304] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.408566 21037 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 0.
I20250114 20:56:41.409045 21267 raft_consensus.cc:2798] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.409399 21267 raft_consensus.cc:491] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.409711 21267 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.406215 21269 raft_consensus.cc:2798] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:41.410909 21269 raft_consensus.cc:491] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:41.411159 21269 raft_consensus.cc:3054] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.414796 21267 raft_consensus.cc:513] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.415717 21269 raft_consensus.cc:513] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.416194 21267 leader_election.cc:290] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.416886 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.417322 21037 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.417279 21110 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:41.418013 21110 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.419083 21038 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "eaa90c0d51d54c47b6c8bb446786b0ed" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
I20250114 20:56:41.419668 21038 raft_consensus.cc:3054] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.420115 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "eaa90c0d51d54c47b6c8bb446786b0ed" candidate_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:41.420663 21113 raft_consensus.cc:3054] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.423332 21037 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.423884 21110 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.424844 21153 leader_election.cc:304] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.425026 21038 raft_consensus.cc:2463] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.425599 21267 raft_consensus.cc:2798] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.425967 21267 raft_consensus.cc:695] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Leader. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:41.426085 21113 raft_consensus.cc:2463] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b135e10ff79849eda53c7a4cbd5ddaff in term 1.
I20250114 20:56:41.426826 21267 consensus_queue.cc:237] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.427027 21153 leader_election.cc:304] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff; no voters: 
I20250114 20:56:41.427889 21269 leader_election.cc:290] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [CANDIDATE]: Term 1 election: Requested vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:41.427944 21268 raft_consensus.cc:2798] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:41.428508 21268 raft_consensus.cc:695] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Leader. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:41.429155 21268 consensus_queue.cc:237] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.434247 20934 catalog_manager.cc:5526] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: term changed from 0 to 1, leader changed from <none> to b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131). New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:41.444804 20934 catalog_manager.cc:5526] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: term changed from 0 to 1, leader changed from <none> to b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131). New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:41.483350 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:41.488626 21285 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:41.489179 21286 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:41.491252 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:41.491705 21288 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:41.492527 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:41.492722 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:41.492853 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888201492842 us; error 0 us; skew 500 ppm
I20250114 20:56:41.493263 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:41.495424 20370 webserver.cc:458] Webserver started at http://127.19.228.132:34855/ using document root <none> and password file <none>
I20250114 20:56:41.495857 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:41.496012 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:41.496207 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:41.497153 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "cc27e96d9e434834830dd90b18d4f132"
format_stamp: "Formatted at 2025-01-14 20:56:41 on dist-test-slave-kc3q"
I20250114 20:56:41.501127 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:56:41.503952 21293 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:41.504606 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:41.504848 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "cc27e96d9e434834830dd90b18d4f132"
format_stamp: "Formatted at 2025-01-14 20:56:41 on dist-test-slave-kc3q"
I20250114 20:56:41.505086 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingTurnOffAndOn.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:41.520484 21238 raft_consensus.cc:491] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.520927 21238 raft_consensus.cc:513] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.522522 21238 leader_election.cc:290] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.523272 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8b7331949c9943e3a7ba51b6c070c834" is_pre_election: true
I20250114 20:56:41.523572 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7b2ff951b26e4289aa7e5e2221df163f" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.524253 21187 raft_consensus.cc:2388] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 61d77d3530a14ebaaf0a1b134e4e28ec in current term 1: Already voted for candidate 8b7331949c9943e3a7ba51b6c070c834 in this term.
I20250114 20:56:41.525413 21001 leader_election.cc:304] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec; no voters: 8b7331949c9943e3a7ba51b6c070c834, b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:41.526019 21238 raft_consensus.cc:3054] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.530069 21238 raft_consensus.cc:2743] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Leader pre-election lost for term 1. Reason: could not achieve majority
I20250114 20:56:41.537645 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:41.538623 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:41.539777 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:41.541738 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:41.541932 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:41.542200 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:41.542348 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:41.584424 21268 consensus_queue.cc:1035] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.584918 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:41383
I20250114 20:56:41.585062 21355 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:41383 every 8 connection(s)
I20250114 20:56:41.585609 21274 raft_consensus.cc:491] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:41.586025 21274 raft_consensus.cc:513] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:41.588191 21274 leader_election.cc:290] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:41.589370 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" is_pre_election: true
I20250114 20:56:41.589370 21037 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0fa752f989fa44768476abbe7be0207e" candidate_uuid: "8b7331949c9943e3a7ba51b6c070c834" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" is_pre_election: true
I20250114 20:56:41.591167 21076 leader_election.cc:304] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 8b7331949c9943e3a7ba51b6c070c834; no voters: 61d77d3530a14ebaaf0a1b134e4e28ec, b135e10ff79849eda53c7a4cbd5ddaff
I20250114 20:56:41.592200 21274 raft_consensus.cc:2743] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Leader pre-election lost for term 1. Reason: could not achieve majority
I20250114 20:56:41.608693 21113 raft_consensus.cc:3054] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:41.616406 21267 consensus_queue.cc:1035] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.616365 21268 consensus_queue.cc:1035] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:56:41.621141 21356 heartbeater.cc:346] Connected to a master server at 127.19.228.190:38233
I20250114 20:56:41.621546 21356 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:41.622370 21356 heartbeater.cc:510] Master 127.19.228.190:38233 requested a full tablet report, sending...
I20250114 20:56:41.624560 20936 ts_manager.cc:194] Registered new tserver with Master: cc27e96d9e434834830dd90b18d4f132 (127.19.228.132:41383)
I20250114 20:56:41.626407 20936 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:51012
I20250114 20:56:41.638214 21269 consensus_queue.cc:1035] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.647138 21268 consensus_queue.cc:1035] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.656177 21267 consensus_queue.cc:1035] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.710968 21238 consensus_queue.cc:1035] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:56:41.720770 21238 consensus_queue.cc:1035] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.834329 21240 consensus_queue.cc:1035] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.843569 21373 consensus_queue.cc:1035] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.856139 21373 consensus_queue.cc:1035] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.865893 21274 consensus_queue.cc:1035] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.876358 21268 consensus_queue.cc:1035] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:41.885705 21269 consensus_queue.cc:1035] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:42.018679 21269 consensus_queue.cc:1035] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:42.028223 21254 consensus_queue.cc:1035] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:42.629431 21356 heartbeater.cc:502] Master 127.19.228.190:38233 was elected leader, sending a full tablet report...
I20250114 20:56:43.296192 20989 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:43.310631 21187 tablet_service.cc:1967] Received LeaderStepDown RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc"
dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
mode: GRACEFUL
new_leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
 from {username='slave'} at 127.0.0.1:60224
I20250114 20:56:43.311177 21187 raft_consensus.cc:604] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Received request to transfer leadership to TS 61d77d3530a14ebaaf0a1b134e4e28ec
I20250114 20:56:43.315483 21187 tablet_service.cc:1967] Received LeaderStepDown RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8"
dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
mode: GRACEFUL
new_leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
 from {username='slave'} at 127.0.0.1:60224
I20250114 20:56:43.316090 21187 raft_consensus.cc:604] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Received request to transfer leadership to TS 61d77d3530a14ebaaf0a1b134e4e28ec
I20250114 20:56:43.317095 20989 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 2
I20250114 20:56:43.317562 20989 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:43.534195 21254 raft_consensus.cc:988] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: : Instructing follower 61d77d3530a14ebaaf0a1b134e4e28ec to start an election
I20250114 20:56:43.534562 21267 raft_consensus.cc:1076] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Signalling peer 61d77d3530a14ebaaf0a1b134e4e28ec to start an election
I20250114 20:56:43.535880 21038 tablet_service.cc:1939] Received Run Leader Election RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8"
dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
 from {username='slave'} at 127.0.0.1:43334
I20250114 20:56:43.536365 21038 raft_consensus.cc:491] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250114 20:56:43.536636 21038 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.540575 21038 raft_consensus.cc:513] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:43.542862 21038 leader_election.cc:290] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 2 election: Requested vote from peers b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827), 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011)
I20250114 20:56:43.542764 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:43.543064 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f57aaf54ced04936902099c4ac2171b8" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:43.543529 21187 raft_consensus.cc:3049] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Stepping down as leader of term 1
I20250114 20:56:43.543628 21113 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.544600 21187 raft_consensus.cc:738] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:43.545217 21187 consensus_queue.cc:260] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:43.546072 21187 raft_consensus.cc:3054] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.547881 21113 raft_consensus.cc:2463] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 2.
I20250114 20:56:43.548719 21004 leader_election.cc:304] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834; no voters: 
I20250114 20:56:43.549306 21382 raft_consensus.cc:2798] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:56:43.550089 21187 raft_consensus.cc:2463] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 2.
I20250114 20:56:43.550189 21382 raft_consensus.cc:695] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 LEADER]: Becoming Leader. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Running, Role: LEADER
I20250114 20:56:43.551061 21382 consensus_queue.cc:237] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } }
I20250114 20:56:43.557147 20936 catalog_manager.cc:5526] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec reported cstate change: term changed from 1 to 2, leader changed from b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131) to 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129). New cstate: current_term: 2 leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:43.577440 21254 raft_consensus.cc:988] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: : Instructing follower 61d77d3530a14ebaaf0a1b134e4e28ec to start an election
I20250114 20:56:43.577782 21267 raft_consensus.cc:1076] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Signalling peer 61d77d3530a14ebaaf0a1b134e4e28ec to start an election
I20250114 20:56:43.578923 21038 tablet_service.cc:1939] Received Run Leader Election RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc"
dest_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec"
 from {username='slave'} at 127.0.0.1:43334
I20250114 20:56:43.579370 21038 raft_consensus.cc:491] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250114 20:56:43.579663 21038 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.583632 21038 raft_consensus.cc:513] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:43.585017 21038 leader_election.cc:290] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 2 election: Requested vote from peers 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011), b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131:35827)
I20250114 20:56:43.585744 21113 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "8b7331949c9943e3a7ba51b6c070c834"
I20250114 20:56:43.585891 21187 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "6f304dd85c4046ab91e8ab8e7a8267bc" candidate_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "b135e10ff79849eda53c7a4cbd5ddaff"
I20250114 20:56:43.586346 21113 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.586382 21187 raft_consensus.cc:3049] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Stepping down as leader of term 1
I20250114 20:56:43.586880 21187 raft_consensus.cc:738] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Becoming Follower/Learner. State: Replica: b135e10ff79849eda53c7a4cbd5ddaff, State: Running, Role: LEADER
I20250114 20:56:43.587641 21187 consensus_queue.cc:260] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:43.588814 21187 raft_consensus.cc:3054] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:43.592329 21113 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 2.
I20250114 20:56:43.593276 21004 leader_election.cc:304] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 61d77d3530a14ebaaf0a1b134e4e28ec, 8b7331949c9943e3a7ba51b6c070c834; no voters: 
I20250114 20:56:43.593876 21382 raft_consensus.cc:2798] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:56:43.594333 21382 raft_consensus.cc:695] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 LEADER]: Becoming Leader. State: Replica: 61d77d3530a14ebaaf0a1b134e4e28ec, State: Running, Role: LEADER
I20250114 20:56:43.594475 21187 raft_consensus.cc:2463] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 61d77d3530a14ebaaf0a1b134e4e28ec in term 2.
I20250114 20:56:43.595100 21382 consensus_queue.cc:237] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } }
I20250114 20:56:43.601178 20936 catalog_manager.cc:5526] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec reported cstate change: term changed from 1 to 2, leader changed from b135e10ff79849eda53c7a4cbd5ddaff (127.19.228.131) to 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129). New cstate: current_term: 2 leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:43.977828 21187 raft_consensus.cc:1270] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:43.978852 21382 consensus_queue.cc:1035] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:43.988510 21113 raft_consensus.cc:1270] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:43.990252 21382 consensus_queue.cc:1035] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:44.022387 21113 raft_consensus.cc:1270] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:44.023459 21382 consensus_queue.cc:1035] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:44.030958 21187 raft_consensus.cc:1270] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:44.032636 21385 consensus_queue.cc:1035] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:45.318498 20989 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:45.323926 20989 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:56:45.324311 20989 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:47.325135 20989 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:47.327194 21038 consensus_queue.cc:237] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } }
I20250114 20:56:47.332183 20989 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:56:47.332901 20989 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:47.335983 21187 raft_consensus.cc:1270] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.337924 21406 consensus_queue.cc:1035] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250114 20:56:47.338815 21113 raft_consensus.cc:1270] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Refusing update from remote peer 61d77d3530a14ebaaf0a1b134e4e28ec: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.340528 21394 consensus_queue.cc:1035] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.003s
I20250114 20:56:47.360142 21394 raft_consensus.cc:2949] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.362160 21187 raft_consensus.cc:2949] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
W20250114 20:56:47.366622 21002 consensus_peers.cc:487] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec -> Peer cc27e96d9e434834830dd90b18d4f132 (127.19.228.132:41383): Couldn't send request to peer cc27e96d9e434834830dd90b18d4f132. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ef19f768f7cd4f13989c35d8426f6de2. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.369661 21113 raft_consensus.cc:2949] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.375027 20936 catalog_manager.cc:5526] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff reported cstate change: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.387140 21187 consensus_queue.cc:237] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } }
I20250114 20:56:47.397359 21113 raft_consensus.cc:1270] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Refusing update from remote peer b135e10ff79849eda53c7a4cbd5ddaff: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.397940 21038 raft_consensus.cc:1270] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Refusing update from remote peer b135e10ff79849eda53c7a4cbd5ddaff: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.399441 21410 consensus_queue.cc:1035] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250114 20:56:47.400918 21267 consensus_queue.cc:1035] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:56:47.407271 21151 consensus_peers.cc:487] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff -> Peer cc27e96d9e434834830dd90b18d4f132 (127.19.228.132:41383): Couldn't send request to peer cc27e96d9e434834830dd90b18d4f132. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 93e308aed06f4681be0ac210581cea28. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.413843 21410 raft_consensus.cc:2949] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.415588 21110 raft_consensus.cc:2949] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.417313 21038 raft_consensus.cc:2949] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.429905 20934 catalog_manager.cc:5526] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec reported cstate change: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } attrs { replace: true } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.431131 21187 consensus_queue.cc:237] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } }
I20250114 20:56:47.438374 21110 raft_consensus.cc:1270] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Refusing update from remote peer b135e10ff79849eda53c7a4cbd5ddaff: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
W20250114 20:56:47.438607 21151 consensus_peers.cc:487] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff -> Peer cc27e96d9e434834830dd90b18d4f132 (127.19.228.132:41383): Couldn't send request to peer cc27e96d9e434834830dd90b18d4f132. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: eaa90c0d51d54c47b6c8bb446786b0ed. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.438759 21038 raft_consensus.cc:1270] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Refusing update from remote peer b135e10ff79849eda53c7a4cbd5ddaff: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.439638 21410 consensus_queue.cc:1035] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:47.440657 21267 consensus_queue.cc:1035] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:47.448576 21267 raft_consensus.cc:2949] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.450932 21038 raft_consensus.cc:2949] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.451830 21110 raft_consensus.cc:2949] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.461938 20935 catalog_manager.cc:5526] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec reported cstate change: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } attrs { replace: true } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.472431 21110 consensus_queue.cc:237] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } }
I20250114 20:56:47.480899 21187 raft_consensus.cc:1270] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Refusing update from remote peer 8b7331949c9943e3a7ba51b6c070c834: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.482448 21274 consensus_queue.cc:1035] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250114 20:56:47.488109 21038 raft_consensus.cc:1270] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Refusing update from remote peer 8b7331949c9943e3a7ba51b6c070c834: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:47.489847 21274 consensus_queue.cc:1035] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [LEADER]: Connected to new peer: Peer: permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:56:47.490846 21077 consensus_peers.cc:487] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 -> Peer cc27e96d9e434834830dd90b18d4f132 (127.19.228.132:41383): Couldn't send request to peer cc27e96d9e434834830dd90b18d4f132. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: f55b0e8f77054cf4b5b26fffaed71f6d. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.492365 21274 raft_consensus.cc:2949] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.494251 21187 raft_consensus.cc:2949] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.499819 21038 raft_consensus.cc:2949] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } } }
I20250114 20:56:47.506803 20936 catalog_manager.cc:5526] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 reported cstate change: config changed from index -1 to 2, NON_VOTER cc27e96d9e434834830dd90b18d4f132 (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "8b7331949c9943e3a7ba51b6c070c834" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "61d77d3530a14ebaaf0a1b134e4e28ec" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 33425 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8b7331949c9943e3a7ba51b6c070c834" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 46011 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b135e10ff79849eda53c7a4cbd5ddaff" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 35827 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cc27e96d9e434834830dd90b18d4f132" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41383 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:47.627394 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
W20250114 20:56:47.636505 20920 proxy.cc:239] Call had error, refreshing address and retrying: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer [suppressed 5 similar messages]
W20250114 20:56:47.641000 20984 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.641320 20984 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
I20250114 20:56:47.651925 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:47.652798 20370 tablet_replica.cc:331] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.653446 20370 raft_consensus.cc:2238] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.653856 20370 raft_consensus.cc:2267] T 7b2ff951b26e4289aa7e5e2221df163f P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.655938 20370 tablet_replica.cc:331] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.656415 20370 raft_consensus.cc:2238] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.657199 20370 raft_consensus.cc:2267] T 6f304dd85c4046ab91e8ab8e7a8267bc P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.658975 20370 tablet_replica.cc:331] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.659649 20370 raft_consensus.cc:2238] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.660558 20370 raft_consensus.cc:2267] T ef19f768f7cd4f13989c35d8426f6de2 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.662792 20370 tablet_replica.cc:331] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.663300 20370 raft_consensus.cc:2238] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.663844 20370 raft_consensus.cc:2267] T eaa90c0d51d54c47b6c8bb446786b0ed P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.666134 20370 tablet_replica.cc:331] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.666676 20370 raft_consensus.cc:2238] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.667302 20370 raft_consensus.cc:2267] T f57aaf54ced04936902099c4ac2171b8 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.669194 20370 tablet_replica.cc:331] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.669644 20370 raft_consensus.cc:2238] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.670176 20370 raft_consensus.cc:2267] T 93e308aed06f4681be0ac210581cea28 P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.672657 20370 tablet_replica.cc:331] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.673094 20370 raft_consensus.cc:2238] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.673468 20370 raft_consensus.cc:2267] T 0fa752f989fa44768476abbe7be0207e P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.675292 20370 tablet_replica.cc:331] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec: stopping tablet replica
I20250114 20:56:47.675853 20370 raft_consensus.cc:2238] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.676227 20370 raft_consensus.cc:2267] T f55b0e8f77054cf4b5b26fffaed71f6d P 61d77d3530a14ebaaf0a1b134e4e28ec [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.695828 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:56:47.715101 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
W20250114 20:56:47.720453 20984 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.720696 20984 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.734180 21153 consensus_peers.cc:487] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff -> Peer 8b7331949c9943e3a7ba51b6c070c834 (127.19.228.130:46011): Couldn't send request to peer 8b7331949c9943e3a7ba51b6c070c834. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.742791 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:47.743459 20370 tablet_replica.cc:331] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.744217 20370 raft_consensus.cc:2238] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.744692 20370 raft_consensus.cc:2267] T 0fa752f989fa44768476abbe7be0207e P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.746974 20370 tablet_replica.cc:331] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.747710 20370 raft_consensus.cc:2238] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.748247 20370 raft_consensus.cc:2267] T ef19f768f7cd4f13989c35d8426f6de2 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
W20250114 20:56:47.748529 21150 consensus_peers.cc:487] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff -> Peer 61d77d3530a14ebaaf0a1b134e4e28ec (127.19.228.129:33425): Couldn't send request to peer 61d77d3530a14ebaaf0a1b134e4e28ec. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:33425: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:47.750412 20370 tablet_replica.cc:331] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.750985 20370 raft_consensus.cc:2238] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.751403 20370 raft_consensus.cc:2267] T 6f304dd85c4046ab91e8ab8e7a8267bc P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.753214 20370 tablet_replica.cc:331] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.753800 20370 raft_consensus.cc:2238] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.754489 20370 raft_consensus.cc:2267] T 7b2ff951b26e4289aa7e5e2221df163f P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.756431 20370 tablet_replica.cc:331] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.756903 20370 raft_consensus.cc:2238] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.757803 20370 raft_consensus.cc:2267] T f55b0e8f77054cf4b5b26fffaed71f6d P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.759999 20370 tablet_replica.cc:331] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.760514 20370 raft_consensus.cc:2238] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.761011 20370 raft_consensus.cc:2267] T 93e308aed06f4681be0ac210581cea28 P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.763156 20370 tablet_replica.cc:331] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.763931 20370 raft_consensus.cc:2238] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.764407 20370 raft_consensus.cc:2267] T eaa90c0d51d54c47b6c8bb446786b0ed P 8b7331949c9943e3a7ba51b6c070c834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.766691 20370 tablet_replica.cc:331] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834: stopping tablet replica
I20250114 20:56:47.767207 20370 raft_consensus.cc:2238] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.767607 20370 raft_consensus.cc:2267] T f57aaf54ced04936902099c4ac2171b8 P 8b7331949c9943e3a7ba51b6c070c834 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.795666 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:56:47.813464 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
W20250114 20:56:47.818603 20984 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.818893 20984 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.824726 20984 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:56:47.825181 20984 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
I20250114 20:56:47.836534 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:47.837205 20370 tablet_replica.cc:331] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.837786 20370 raft_consensus.cc:2238] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.838224 20370 raft_consensus.cc:2267] T f55b0e8f77054cf4b5b26fffaed71f6d P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.840206 20370 tablet_replica.cc:331] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.840709 20370 raft_consensus.cc:2238] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.841117 20370 raft_consensus.cc:2267] T ef19f768f7cd4f13989c35d8426f6de2 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.843619 20370 tablet_replica.cc:331] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.844115 20370 raft_consensus.cc:2238] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.844993 20370 raft_consensus.cc:2267] T 93e308aed06f4681be0ac210581cea28 P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.846997 20370 tablet_replica.cc:331] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.847432 20370 raft_consensus.cc:2238] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.848134 20370 raft_consensus.cc:2267] T 0fa752f989fa44768476abbe7be0207e P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.849612 20370 tablet_replica.cc:331] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.850051 20370 raft_consensus.cc:2238] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.850404 20370 raft_consensus.cc:2267] T 7b2ff951b26e4289aa7e5e2221df163f P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.851951 20370 tablet_replica.cc:331] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.852376 20370 raft_consensus.cc:2238] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.852743 20370 raft_consensus.cc:2267] T 6f304dd85c4046ab91e8ab8e7a8267bc P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.854308 20370 tablet_replica.cc:331] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.854830 20370 raft_consensus.cc:2238] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.855731 20370 raft_consensus.cc:2267] T eaa90c0d51d54c47b6c8bb446786b0ed P b135e10ff79849eda53c7a4cbd5ddaff [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.858070 20370 tablet_replica.cc:331] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff: stopping tablet replica
I20250114 20:56:47.858444 20370 raft_consensus.cc:2238] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:47.858800 20370 raft_consensus.cc:2267] T f57aaf54ced04936902099c4ac2171b8 P b135e10ff79849eda53c7a4cbd5ddaff [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.882685 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:56:47.898023 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:56:47.914300 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:47.929353 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:56:47.936334 20370 master.cc:537] Master@127.19.228.190:38233 shutting down...
I20250114 20:56:47.950114 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:47.950585 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:47.950866 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 558c7e4f8e9448feb28007f5374b1df2: stopping tablet replica
I20250114 20:56:47.968446 20370 master.cc:559] Master@127.19.228.190:38233 shutdown complete.
[       OK ] AutoRebalancerTest.AutoRebalancingTurnOffAndOn (8831 ms)
[ RUN      ] AutoRebalancerTest.NextLeaderResumesAutoRebalancing
I20250114 20:56:48.001852 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:40947,127.19.228.189:38113,127.19.228.188:46881
I20250114 20:56:48.003031 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:48.008162 21435 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.008766 21436 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.009374 21438 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.010378 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:48.011082 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.011267 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.011411 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208011393 us; error 0 us; skew 500 ppm
I20250114 20:56:48.011874 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.014062 20370 webserver.cc:458] Webserver started at http://127.19.228.190:44279/ using document root <none> and password file <none>
I20250114 20:56:48.014510 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.014671 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.014919 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.016000 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "78c1f705eafc4517abe78ea33eb1ad7f"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.020404 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.003s
I20250114 20:56:48.023342 21443 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.024032 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:56:48.024268 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "78c1f705eafc4517abe78ea33eb1ad7f"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.024507 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:48.048413 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.049511 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.083153 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:40947
I20250114 20:56:48.083231 21494 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:40947 every 8 connection(s)
I20250114 20:56:48.086742 21495 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:48.086990 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:48.092216 21497 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.093189 21498 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.093520 21495 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.096086 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:48.096657 21500 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.097613 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.097874 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.098066 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208098049 us; error 0 us; skew 500 ppm
I20250114 20:56:48.098660 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.101099 20370 webserver.cc:458] Webserver started at http://127.19.228.189:46219/ using document root <none> and password file <none>
I20250114 20:56:48.101657 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.101881 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.102175 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.103435 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root/instance:
uuid: "dac1e1a6988a438dbdc36d89518321dc"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.106182 21495 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.108111 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
W20250114 20:56:48.110836 21495 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.189:38113: Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:48.111855 21508 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.112540 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.002s
I20250114 20:56:48.112790 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
uuid: "dac1e1a6988a438dbdc36d89518321dc"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.113034 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:48.130728 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.131803 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.164227 21495 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } attempt: 1
I20250114 20:56:48.164583 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.189:38113
I20250114 20:56:48.164752 21559 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.189:38113 every 8 connection(s)
I20250114 20:56:48.169176 21561 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:48.169302 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:48.175506 21563 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.175696 21495 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.179629 21561 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } has no permanent_uuid. Determining permanent_uuid...
W20250114 20:56:48.180478 21564 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.181553 21495 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:46881: Network error: Client connection negotiation failed: client connection to 127.19.228.188:46881: connect: Connection refused (error 111)
W20250114 20:56:48.184022 21566 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.186084 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:48.186950 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.187143 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.187284 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208187269 us; error 0 us; skew 500 ppm
I20250114 20:56:48.187777 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.191421 20370 webserver.cc:458] Webserver started at http://127.19.228.188:37709/ using document root <none> and password file <none>
I20250114 20:56:48.191450 21561 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.191946 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.192139 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.192391 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.193537 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root/instance:
uuid: "b7bc7ab1eb254f8eb346cd7c929c8176"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.198503 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:56:48.200879 21561 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.201972 21572 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.202739 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:56:48.203004 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
uuid: "b7bc7ab1eb254f8eb346cd7c929c8176"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.203330 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/master-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
W20250114 20:56:48.205473 21561 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:46881: Network error: Client connection negotiation failed: client connection to 127.19.228.188:46881: connect: Connection refused (error 111)
I20250114 20:56:48.224239 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.225401 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.229044 21495 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } attempt: 1
W20250114 20:56:48.233390 21495 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:46881: Network error: Client connection negotiation failed: client connection to 127.19.228.188:46881: connect: Connection refused (error 111)
I20250114 20:56:48.241930 21561 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } attempt: 1
W20250114 20:56:48.245790 21561 consensus_peers.cc:646] Error getting permanent uuid from config peer 127.19.228.188:46881: Network error: Client connection negotiation failed: client connection to 127.19.228.188:46881: connect: Connection refused (error 111)
I20250114 20:56:48.262704 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.188:46881
I20250114 20:56:48.262795 21624 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.188:46881 every 8 connection(s)
I20250114 20:56:48.265537 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:56:48.266173 21625 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:48.271286 21625 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.277783 21495 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } attempt: 2
I20250114 20:56:48.281536 21625 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.290877 21625 sys_catalog.cc:422] member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } has no permanent_uuid. Determining permanent_uuid...
I20250114 20:56:48.294370 21495 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: Bootstrap starting.
I20250114 20:56:48.299731 21495 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:48.304297 21561 consensus_peers.cc:656] Retrying to get permanent uuid for remote peer: member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } attempt: 2
I20250114 20:56:48.304522 21495 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: No bootstrap required, opened a new log
I20250114 20:56:48.306545 21625 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: Bootstrap starting.
I20250114 20:56:48.308151 21495 raft_consensus.cc:357] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.308902 21495 raft_consensus.cc:383] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:48.309180 21495 raft_consensus.cc:738] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 78c1f705eafc4517abe78ea33eb1ad7f, State: Initialized, Role: FOLLOWER
I20250114 20:56:48.309841 21495 consensus_queue.cc:260] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.311977 21630 sys_catalog.cc:455] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.312698 21625 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:48.312837 21630 sys_catalog.cc:458] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.313980 21495 sys_catalog.cc:564] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:48.318084 21625 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: No bootstrap required, opened a new log
I20250114 20:56:48.320001 21561 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: Bootstrap starting.
I20250114 20:56:48.320816 21625 raft_consensus.cc:357] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.321535 21625 raft_consensus.cc:383] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:48.321807 21625 raft_consensus.cc:738] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b7bc7ab1eb254f8eb346cd7c929c8176, State: Initialized, Role: FOLLOWER
I20250114 20:56:48.322490 21625 consensus_queue.cc:260] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.325567 21561 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:48.327205 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 1
I20250114 20:56:48.328615 21639 sys_catalog.cc:455] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.329538 21639 sys_catalog.cc:458] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.330044 21625 sys_catalog.cc:564] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: configured and running, proceeding with master startup.
W20250114 20:56:48.331933 21643 catalog_manager.cc:1559] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:48.332233 21643 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:56:48.333825 21561 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: No bootstrap required, opened a new log
I20250114 20:56:48.336241 21561 raft_consensus.cc:357] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.336740 21561 raft_consensus.cc:383] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:48.336974 21561 raft_consensus.cc:738] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dac1e1a6988a438dbdc36d89518321dc, State: Initialized, Role: FOLLOWER
I20250114 20:56:48.337531 21561 consensus_queue.cc:260] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.339458 21651 sys_catalog.cc:455] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 0 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.340308 21651 sys_catalog.cc:458] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.341042 21561 sys_catalog.cc:564] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: configured and running, proceeding with master startup.
W20250114 20:56:48.347318 21656 catalog_manager.cc:1559] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:48.347626 21656 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
I20250114 20:56:48.351840 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 2
I20250114 20:56:48.353123 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:48.354636 21667 catalog_manager.cc:1559] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: loading cluster ID for follower catalog manager: Not found: cluster ID entry not found
W20250114 20:56:48.354905 21667 catalog_manager.cc:874] Not found: cluster ID entry not found: failed to prepare follower catalog manager, will retry
W20250114 20:56:48.358968 21668 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.359853 21669 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.360558 21671 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.360999 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:48.361752 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.362087 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.362246 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208362227 us; error 0 us; skew 500 ppm
I20250114 20:56:48.362705 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.364845 20370 webserver.cc:458] Webserver started at http://127.19.228.129:40901/ using document root <none> and password file <none>
I20250114 20:56:48.365267 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.365439 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.365679 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.366744 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "b605d082f85044548a7e3d36d909f645"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.370941 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.000s	sys 0.006s
I20250114 20:56:48.373943 21676 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.374655 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:56:48.374929 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "b605d082f85044548a7e3d36d909f645"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.375192 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:48.395320 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.396512 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.397855 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:48.400030 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:48.400219 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.400430 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:48.400575 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.441121 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:36051
I20250114 20:56:48.441217 21738 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:36051 every 8 connection(s)
I20250114 20:56:48.462106 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:56:48.463201 21739 heartbeater.cc:346] Connected to a master server at 127.19.228.188:46881
I20250114 20:56:48.463604 21739 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.464398 21739 heartbeater.cc:510] Master 127.19.228.188:46881 requested a full tablet report, sending...
I20250114 20:56:48.464468 21740 heartbeater.cc:346] Connected to a master server at 127.19.228.189:38113
I20250114 20:56:48.464896 21740 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.465657 21740 heartbeater.cc:510] Master 127.19.228.189:38113 requested a full tablet report, sending...
I20250114 20:56:48.467456 21590 ts_manager.cc:194] Registered new tserver with Master: b605d082f85044548a7e3d36d909f645 (127.19.228.129:36051)
I20250114 20:56:48.469890 21525 ts_manager.cc:194] Registered new tserver with Master: b605d082f85044548a7e3d36d909f645 (127.19.228.129:36051)
I20250114 20:56:48.471611 21741 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40947
I20250114 20:56:48.471972 21741 heartbeater.cc:463] Registering TS with master...
W20250114 20:56:48.472129 21746 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.472981 21741 heartbeater.cc:510] Master 127.19.228.190:40947 requested a full tablet report, sending...
W20250114 20:56:48.475299 21747 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.475839 21460 ts_manager.cc:194] Registered new tserver with Master: b605d082f85044548a7e3d36d909f645 (127.19.228.129:36051)
I20250114 20:56:48.476913 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:48.477906 21749 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.478775 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.478976 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.479131 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208479114 us; error 0 us; skew 500 ppm
I20250114 20:56:48.479595 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.481786 20370 webserver.cc:458] Webserver started at http://127.19.228.130:42003/ using document root <none> and password file <none>
I20250114 20:56:48.482221 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.482384 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.482622 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.483721 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "25587d7d11014e408f78502f9a391100"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.487819 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.001s
I20250114 20:56:48.490691 21754 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.491362 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:56:48.491652 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "25587d7d11014e408f78502f9a391100"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.491905 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:48.511883 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.512995 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.514313 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:48.516729 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:48.516922 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.517124 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:48.517267 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.519731 21651 raft_consensus.cc:491] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:48.520174 21651 raft_consensus.cc:513] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.521822 21651 leader_election.cc:290] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), b7bc7ab1eb254f8eb346cd7c929c8176 (127.19.228.188:46881)
I20250114 20:56:48.522729 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "dac1e1a6988a438dbdc36d89518321dc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" is_pre_election: true
I20250114 20:56:48.523017 21600 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "dac1e1a6988a438dbdc36d89518321dc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" is_pre_election: true
I20250114 20:56:48.523655 21470 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dac1e1a6988a438dbdc36d89518321dc in term 0.
I20250114 20:56:48.523764 21600 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dac1e1a6988a438dbdc36d89518321dc in term 0.
I20250114 20:56:48.525321 21511 leader_election.cc:304] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, dac1e1a6988a438dbdc36d89518321dc; no voters: 
I20250114 20:56:48.526037 21651 raft_consensus.cc:2798] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:48.526378 21651 raft_consensus.cc:491] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:48.526633 21651 raft_consensus.cc:3054] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:48.556529 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:37369
I20250114 20:56:48.556684 21816 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:37369 every 8 connection(s)
I20250114 20:56:48.565557 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:56:48.577303 21817 heartbeater.cc:346] Connected to a master server at 127.19.228.188:46881
I20250114 20:56:48.577775 21817 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.578663 21817 heartbeater.cc:510] Master 127.19.228.188:46881 requested a full tablet report, sending...
I20250114 20:56:48.581750 21818 heartbeater.cc:346] Connected to a master server at 127.19.228.189:38113
I20250114 20:56:48.582180 21818 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.583134 21818 heartbeater.cc:510] Master 127.19.228.189:38113 requested a full tablet report, sending...
I20250114 20:56:48.584329 21651 raft_consensus.cc:513] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.586079 21590 ts_manager.cc:194] Registered new tserver with Master: 25587d7d11014e408f78502f9a391100 (127.19.228.130:37369)
I20250114 20:56:48.586403 21651 leader_election.cc:290] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [CANDIDATE]: Term 1 election: Requested vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), b7bc7ab1eb254f8eb346cd7c929c8176 (127.19.228.188:46881)
I20250114 20:56:48.588286 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "dac1e1a6988a438dbdc36d89518321dc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f"
I20250114 20:56:48.588950 21470 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:48.592846 21525 ts_manager.cc:194] Registered new tserver with Master: 25587d7d11014e408f78502f9a391100 (127.19.228.130:37369)
I20250114 20:56:48.595279 21600 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "dac1e1a6988a438dbdc36d89518321dc" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176"
I20250114 20:56:48.595897 21600 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:48.600402 21819 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40947
I20250114 20:56:48.600805 21819 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.601702 21819 heartbeater.cc:510] Master 127.19.228.190:40947 requested a full tablet report, sending...
W20250114 20:56:48.603214 21824 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:48.604640 21825 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.606072 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:48.606660 21827 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:48.607425 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:48.607640 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:48.607784 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888208607773 us; error 0 us; skew 500 ppm
I20250114 20:56:48.608192 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:48.610394 20370 webserver.cc:458] Webserver started at http://127.19.228.131:44869/ using document root <none> and password file <none>
I20250114 20:56:48.610870 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:48.611125 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:48.611356 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:48.612370 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "ef56de0536214aeeb98e26b2658c5766"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.616510 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:56:48.619431 21832 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.619858 21470 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dac1e1a6988a438dbdc36d89518321dc in term 1.
I20250114 20:56:48.619879 21600 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dac1e1a6988a438dbdc36d89518321dc in term 1.
I20250114 20:56:48.620530 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:48.620939 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "ef56de0536214aeeb98e26b2658c5766"
format_stamp: "Formatted at 2025-01-14 20:56:48 on dist-test-slave-kc3q"
I20250114 20:56:48.621266 21511 leader_election.cc:304] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, dac1e1a6988a438dbdc36d89518321dc; no voters: 
I20250114 20:56:48.621304 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NextLeaderResumesAutoRebalancing.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:48.622458 21651 raft_consensus.cc:2798] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:48.622851 21460 ts_manager.cc:194] Registered new tserver with Master: 25587d7d11014e408f78502f9a391100 (127.19.228.130:37369)
I20250114 20:56:48.625101 21651 raft_consensus.cc:695] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 1 LEADER]: Becoming Leader. State: Replica: dac1e1a6988a438dbdc36d89518321dc, State: Running, Role: LEADER
I20250114 20:56:48.626179 21651 consensus_queue.cc:237] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:48.633527 21833 sys_catalog.cc:455] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: SysCatalogTable state changed. Reason: New leader dac1e1a6988a438dbdc36d89518321dc. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.634095 21833 sys_catalog.cc:458] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:48.635684 21839 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:56:48.642905 21839 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:56:48.651439 21600 raft_consensus.cc:1270] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Refusing update from remote peer dac1e1a6988a438dbdc36d89518321dc: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:48.651738 21470 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Refusing update from remote peer dac1e1a6988a438dbdc36d89518321dc: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:56:48.652686 21833 consensus_queue.cc:1035] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [LEADER]: Connected to new peer: Peer: permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:48.653340 21651 consensus_queue.cc:1035] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [LEADER]: Connected to new peer: Peer: permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:48.660825 21630 sys_catalog.cc:455] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: SysCatalogTable state changed. Reason: New leader dac1e1a6988a438dbdc36d89518321dc. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.661614 21630 sys_catalog.cc:458] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.664196 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:48.663980 21639 sys_catalog.cc:455] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: SysCatalogTable state changed. Reason: New leader dac1e1a6988a438dbdc36d89518321dc. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.664705 21639 sys_catalog.cc:458] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.665974 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:48.669051 21630 sys_catalog.cc:455] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.669770 21630 sys_catalog.cc:458] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.673133 21651 sys_catalog.cc:455] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.673949 21651 sys_catalog.cc:458] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:48.675391 21839 catalog_manager.cc:1348] Generated new cluster ID: 3b3edf3d1e7c4404aadc9cdf7bc24cff
I20250114 20:56:48.675756 21839 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:56:48.677222 21639 sys_catalog.cc:455] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.677942 21639 sys_catalog.cc:458] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:48.679141 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:48.679930 21833 sys_catalog.cc:455] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 1 leader_uuid: "dac1e1a6988a438dbdc36d89518321dc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:48.680446 21833 sys_catalog.cc:458] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:48.682039 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:48.682308 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.682559 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:48.682765 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:48.726454 21839 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:56:48.728955 21839 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:56:48.740929 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:37273
I20250114 20:56:48.741075 21906 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:37273 every 8 connection(s)
I20250114 20:56:48.752558 21839 catalog_manager.cc:5899] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: Generated new TSK 0
I20250114 20:56:48.753369 21839 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:56:48.761966 21907 heartbeater.cc:346] Connected to a master server at 127.19.228.188:46881
I20250114 20:56:48.762516 21907 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.763516 21907 heartbeater.cc:510] Master 127.19.228.188:46881 requested a full tablet report, sending...
I20250114 20:56:48.766605 21590 ts_manager.cc:194] Registered new tserver with Master: ef56de0536214aeeb98e26b2658c5766 (127.19.228.131:37273)
I20250114 20:56:48.771308 21909 heartbeater.cc:346] Connected to a master server at 127.19.228.189:38113
I20250114 20:56:48.771665 21909 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.772506 21909 heartbeater.cc:510] Master 127.19.228.189:38113 requested a full tablet report, sending...
I20250114 20:56:48.774860 21525 ts_manager.cc:194] Registered new tserver with Master: ef56de0536214aeeb98e26b2658c5766 (127.19.228.131:37273)
I20250114 20:56:48.776432 21911 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40947
I20250114 20:56:48.776819 21911 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:48.776985 21525 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:47188
I20250114 20:56:48.777746 21911 heartbeater.cc:510] Master 127.19.228.190:40947 requested a full tablet report, sending...
I20250114 20:56:48.779968 21460 ts_manager.cc:194] Registered new tserver with Master: ef56de0536214aeeb98e26b2658c5766 (127.19.228.131:37273)
I20250114 20:56:48.780936 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.011415375s
I20250114 20:56:49.336673 21643 catalog_manager.cc:1260] Loaded cluster ID: 3b3edf3d1e7c4404aadc9cdf7bc24cff
I20250114 20:56:49.336922 21643 catalog_manager.cc:1553] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: loading cluster ID for follower catalog manager: success
I20250114 20:56:49.341082 21643 catalog_manager.cc:1575] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: acquiring CA information for follower catalog manager: success
I20250114 20:56:49.344167 21643 catalog_manager.cc:1603] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:56:49.351132 21656 catalog_manager.cc:1260] Loaded cluster ID: 3b3edf3d1e7c4404aadc9cdf7bc24cff
I20250114 20:56:49.351394 21656 catalog_manager.cc:1553] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: loading cluster ID for follower catalog manager: success
I20250114 20:56:49.355497 21656 catalog_manager.cc:1575] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: acquiring CA information for follower catalog manager: success
I20250114 20:56:49.358482 21656 catalog_manager.cc:1603] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: importing token verification keys for follower catalog manager: success; most recent TSK sequence number 0
I20250114 20:56:49.474011 21525 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:47170
I20250114 20:56:49.599134 21525 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:47180
I20250114 20:56:49.779414 21909 heartbeater.cc:502] Master 127.19.228.189:38113 was elected leader, sending a full tablet report...
I20250114 20:56:50.351771 21666 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:50.476094 21740 heartbeater.cc:502] Master 127.19.228.189:38113 was elected leader, sending a full tablet report...
I20250114 20:56:50.601220 21818 heartbeater.cc:502] Master 127.19.228.189:38113 was elected leader, sending a full tablet report...
I20250114 20:56:50.851420 20370 test_util.cc:274] Using random seed: -858618297
I20250114 20:56:50.879400 21525 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47192:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
  rows: "\004\001\000\252\252\252*\004\001\000TUUU"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250114 20:56:50.942983 21871 tablet_service.cc:1467] Processing CreateTablet for tablet 3c8976f2d9f348399cb5330c271b82ba (DEFAULT_TABLE table=test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a]), partition=RANGE (key) PARTITION VALUES < 715827882
I20250114 20:56:50.944602 21871 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3c8976f2d9f348399cb5330c271b82ba. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:50.953975 21704 tablet_service.cc:1467] Processing CreateTablet for tablet d5e8b2c663664c98af6e5effc1c5b272 (DEFAULT_TABLE table=test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a]), partition=RANGE (key) PARTITION 715827882 <= VALUES < 1431655764
I20250114 20:56:50.955266 21704 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d5e8b2c663664c98af6e5effc1c5b272. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:50.958115 21782 tablet_service.cc:1467] Processing CreateTablet for tablet 4b631ac95b1049d09ad265a6e8cb9ac3 (DEFAULT_TABLE table=test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a]), partition=RANGE (key) PARTITION 1431655764 <= VALUES
I20250114 20:56:50.959235 21782 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4b631ac95b1049d09ad265a6e8cb9ac3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:50.959463 21942 tablet_bootstrap.cc:492] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: Bootstrap starting.
I20250114 20:56:50.965278 21942 tablet_bootstrap.cc:654] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:50.968386 21943 tablet_bootstrap.cc:492] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: Bootstrap starting.
I20250114 20:56:50.971148 21942 tablet_bootstrap.cc:492] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: No bootstrap required, opened a new log
I20250114 20:56:50.971648 21942 ts_tablet_manager.cc:1397] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: Time spent bootstrapping tablet: real 0.012s	user 0.012s	sys 0.000s
I20250114 20:56:50.974267 21943 tablet_bootstrap.cc:654] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:50.974418 21942 raft_consensus.cc:357] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ef56de0536214aeeb98e26b2658c5766" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37273 } }
I20250114 20:56:50.975066 21942 raft_consensus.cc:383] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:50.975349 21942 raft_consensus.cc:738] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ef56de0536214aeeb98e26b2658c5766, State: Initialized, Role: FOLLOWER
I20250114 20:56:50.976034 21942 consensus_queue.cc:260] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ef56de0536214aeeb98e26b2658c5766" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37273 } }
I20250114 20:56:50.976774 21942 raft_consensus.cc:397] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:50.977171 21942 raft_consensus.cc:491] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:50.977561 21942 raft_consensus.cc:3054] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:50.978607 21943 tablet_bootstrap.cc:492] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: No bootstrap required, opened a new log
I20250114 20:56:50.978935 21943 ts_tablet_manager.cc:1397] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: Time spent bootstrapping tablet: real 0.011s	user 0.003s	sys 0.006s
I20250114 20:56:50.979137 21945 tablet_bootstrap.cc:492] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: Bootstrap starting.
I20250114 20:56:50.980820 21943 raft_consensus.cc:357] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b605d082f85044548a7e3d36d909f645" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36051 } }
I20250114 20:56:50.981220 21943 raft_consensus.cc:383] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:50.981427 21943 raft_consensus.cc:738] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b605d082f85044548a7e3d36d909f645, State: Initialized, Role: FOLLOWER
I20250114 20:56:50.981899 21943 consensus_queue.cc:260] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b605d082f85044548a7e3d36d909f645" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36051 } }
I20250114 20:56:50.982338 21943 raft_consensus.cc:397] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:50.982555 21943 raft_consensus.cc:491] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:50.982796 21943 raft_consensus.cc:3054] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:50.984580 21945 tablet_bootstrap.cc:654] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:50.989300 21945 tablet_bootstrap.cc:492] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: No bootstrap required, opened a new log
I20250114 20:56:50.989655 21945 ts_tablet_manager.cc:1397] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: Time spent bootstrapping tablet: real 0.011s	user 0.008s	sys 0.000s
I20250114 20:56:50.991416 21945 raft_consensus.cc:357] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "25587d7d11014e408f78502f9a391100" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37369 } }
I20250114 20:56:50.991884 21945 raft_consensus.cc:383] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:50.992092 21945 raft_consensus.cc:738] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 25587d7d11014e408f78502f9a391100, State: Initialized, Role: FOLLOWER
I20250114 20:56:50.992573 21945 consensus_queue.cc:260] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "25587d7d11014e408f78502f9a391100" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37369 } }
I20250114 20:56:50.993023 21945 raft_consensus.cc:397] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:50.993239 21945 raft_consensus.cc:491] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:50.993497 21945 raft_consensus.cc:3054] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:51.037304 21942 raft_consensus.cc:513] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ef56de0536214aeeb98e26b2658c5766" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37273 } }
I20250114 20:56:51.037308 21945 raft_consensus.cc:513] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "25587d7d11014e408f78502f9a391100" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37369 } }
I20250114 20:56:51.037308 21943 raft_consensus.cc:513] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b605d082f85044548a7e3d36d909f645" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36051 } }
I20250114 20:56:51.038352 21942 leader_election.cc:304] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ef56de0536214aeeb98e26b2658c5766; no voters: 
I20250114 20:56:51.038480 21945 leader_election.cc:304] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 25587d7d11014e408f78502f9a391100; no voters: 
I20250114 20:56:51.038542 21943 leader_election.cc:304] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b605d082f85044548a7e3d36d909f645; no voters: 
I20250114 20:56:51.041464 21942 leader_election.cc:290] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:51.041869 21948 raft_consensus.cc:2798] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:51.044517 21950 raft_consensus.cc:2798] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:51.045643 21943 leader_election.cc:290] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:51.047598 21950 raft_consensus.cc:695] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 1 LEADER]: Becoming Leader. State: Replica: b605d082f85044548a7e3d36d909f645, State: Running, Role: LEADER
I20250114 20:56:51.050423 21950 consensus_queue.cc:237] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b605d082f85044548a7e3d36d909f645" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36051 } }
I20250114 20:56:51.049767 21943 ts_tablet_manager.cc:1428] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: Time spent starting tablet: real 0.071s	user 0.012s	sys 0.005s
I20250114 20:56:51.052763 21942 ts_tablet_manager.cc:1428] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: Time spent starting tablet: real 0.081s	user 0.022s	sys 0.004s
I20250114 20:56:51.052001 21945 leader_election.cc:290] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:51.054039 21948 raft_consensus.cc:695] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 1 LEADER]: Becoming Leader. State: Replica: ef56de0536214aeeb98e26b2658c5766, State: Running, Role: LEADER
I20250114 20:56:51.052289 21949 raft_consensus.cc:2798] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:51.056130 21945 ts_tablet_manager.cc:1428] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: Time spent starting tablet: real 0.066s	user 0.007s	sys 0.016s
I20250114 20:56:51.054828 21948 consensus_queue.cc:237] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ef56de0536214aeeb98e26b2658c5766" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37273 } }
I20250114 20:56:51.060391 21949 raft_consensus.cc:695] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 1 LEADER]: Becoming Leader. State: Replica: 25587d7d11014e408f78502f9a391100, State: Running, Role: LEADER
I20250114 20:56:51.061031 21949 consensus_queue.cc:237] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "25587d7d11014e408f78502f9a391100" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37369 } }
I20250114 20:56:51.064786 21525 catalog_manager.cc:5526] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 reported cstate change: term changed from 0 to 1, leader changed from <none> to b605d082f85044548a7e3d36d909f645 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "b605d082f85044548a7e3d36d909f645" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b605d082f85044548a7e3d36d909f645" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36051 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:51.083401 21524 catalog_manager.cc:5526] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 reported cstate change: term changed from 0 to 1, leader changed from <none> to ef56de0536214aeeb98e26b2658c5766 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "ef56de0536214aeeb98e26b2658c5766" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ef56de0536214aeeb98e26b2658c5766" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37273 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:51.096961 21523 catalog_manager.cc:5526] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 reported cstate change: term changed from 0 to 1, leader changed from <none> to 25587d7d11014e408f78502f9a391100 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "25587d7d11014e408f78502f9a391100" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "25587d7d11014e408f78502f9a391100" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37369 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:51.121305 20370 master.cc:537] Master@127.19.228.189:38113 shutting down...
I20250114 20:56:51.135573 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:51.136338 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:51.136652 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P dac1e1a6988a438dbdc36d89518321dc: stopping tablet replica
I20250114 20:56:51.157441 20370 master.cc:559] Master@127.19.228.189:38113 shutdown complete.
W20250114 20:56:52.105958 21740 heartbeater.cc:643] Failed to heartbeat to 127.19.228.189:38113 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:52.619177 21967 raft_consensus.cc:491] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Starting pre-election (detected failure of leader dac1e1a6988a438dbdc36d89518321dc)
I20250114 20:56:52.619279 21968 raft_consensus.cc:491] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader dac1e1a6988a438dbdc36d89518321dc)
I20250114 20:56:52.619685 21967 raft_consensus.cc:513] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:52.619760 21968 raft_consensus.cc:513] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:52.621271 21967 leader_election.cc:290] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113), b7bc7ab1eb254f8eb346cd7c929c8176 (127.19.228.188:46881)
I20250114 20:56:52.622808 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" candidate_term: 2 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" is_pre_election: true
I20250114 20:56:52.623198 21968 leader_election.cc:290] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113)
I20250114 20:56:52.623529 21470 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b7bc7ab1eb254f8eb346cd7c929c8176 in term 1.
I20250114 20:56:52.624591 21599 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" candidate_term: 2 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" is_pre_election: true
I20250114 20:56:52.624936 21576 leader_election.cc:304] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, b7bc7ab1eb254f8eb346cd7c929c8176; no voters: 
I20250114 20:56:52.625247 21599 raft_consensus.cc:2463] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 78c1f705eafc4517abe78ea33eb1ad7f in term 1.
I20250114 20:56:52.626317 21444 leader_election.cc:304] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, b7bc7ab1eb254f8eb346cd7c929c8176; no voters: 
I20250114 20:56:52.626041 21968 raft_consensus.cc:2798] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:56:52.627115 21967 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:56:52.627245 21968 raft_consensus.cc:491] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Starting leader election (detected failure of leader dac1e1a6988a438dbdc36d89518321dc)
I20250114 20:56:52.627493 21967 raft_consensus.cc:491] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Starting leader election (detected failure of leader dac1e1a6988a438dbdc36d89518321dc)
I20250114 20:56:52.627893 21967 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:52.629559 21968 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 1 FOLLOWER]: Advancing to term 2
W20250114 20:56:52.630326 21447 leader_election.cc:336] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
W20250114 20:56:52.631356 21577 leader_election.cc:336] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:52.650269 21968 raft_consensus.cc:513] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:52.650295 21967 raft_consensus.cc:513] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:52.652594 21968 leader_election.cc:290] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 election: Requested vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113)
I20250114 20:56:52.652634 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" candidate_term: 2 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f"
W20250114 20:56:52.653366 21577 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111) [suppressed 14 similar messages]
I20250114 20:56:52.653501 21470 raft_consensus.cc:2388] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate b7bc7ab1eb254f8eb346cd7c929c8176 in current term 2: Already voted for candidate 78c1f705eafc4517abe78ea33eb1ad7f in this term.
I20250114 20:56:52.654245 21967 leader_election.cc:290] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 election: Requested vote from peers dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113), b7bc7ab1eb254f8eb346cd7c929c8176 (127.19.228.188:46881)
I20250114 20:56:52.655227 21599 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" candidate_term: 2 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176"
I20250114 20:56:52.655951 21599 raft_consensus.cc:2388] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 78c1f705eafc4517abe78ea33eb1ad7f in current term 2: Already voted for candidate b7bc7ab1eb254f8eb346cd7c929c8176 in this term.
W20250114 20:56:52.656354 21577 leader_election.cc:336] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
W20250114 20:56:52.656467 21447 leader_election.cc:336] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:52.656833 21577 leader_election.cc:304] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: b7bc7ab1eb254f8eb346cd7c929c8176; no voters: 78c1f705eafc4517abe78ea33eb1ad7f, dac1e1a6988a438dbdc36d89518321dc
I20250114 20:56:52.657050 21444 leader_election.cc:304] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [CANDIDATE]: Term 2 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f; no voters: b7bc7ab1eb254f8eb346cd7c929c8176, dac1e1a6988a438dbdc36d89518321dc
I20250114 20:56:52.657528 21968 raft_consensus.cc:2743] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20250114 20:56:52.657713 21967 raft_consensus.cc:2743] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 2 FOLLOWER]: Leader election lost for term 2. Reason: could not achieve majority
I20250114 20:56:54.480638 21980 raft_consensus.cc:491] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:54.481319 21980 raft_consensus.cc:513] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:54.482995 21980 leader_election.cc:290] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113)
I20250114 20:56:54.484570 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" candidate_term: 3 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" is_pre_election: true
I20250114 20:56:54.485669 21470 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b7bc7ab1eb254f8eb346cd7c929c8176 in term 2.
I20250114 20:56:54.487120 21576 leader_election.cc:304] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, b7bc7ab1eb254f8eb346cd7c929c8176; no voters: 
I20250114 20:56:54.487864 21980 raft_consensus.cc:2798] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250114 20:56:54.488144 21980 raft_consensus.cc:491] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:54.488392 21980 raft_consensus.cc:3054] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 2 FOLLOWER]: Advancing to term 3
W20250114 20:56:54.489800 21577 leader_election.cc:336] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:54.494132 21980 raft_consensus.cc:513] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:54.495714 21980 leader_election.cc:290] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 election: Requested vote from peers 78c1f705eafc4517abe78ea33eb1ad7f (127.19.228.190:40947), dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113)
I20250114 20:56:54.496474 21470 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "00000000000000000000000000000000" candidate_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" candidate_term: 3 candidate_status { last_received { term: 1 index: 9 } } ignore_live_leader: false dest_uuid: "78c1f705eafc4517abe78ea33eb1ad7f"
I20250114 20:56:54.497056 21470 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 2 FOLLOWER]: Advancing to term 3
W20250114 20:56:54.499004 21577 leader_election.cc:336] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111)
I20250114 20:56:54.502154 21470 raft_consensus.cc:2463] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b7bc7ab1eb254f8eb346cd7c929c8176 in term 3.
I20250114 20:56:54.503046 21576 leader_election.cc:304] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 78c1f705eafc4517abe78ea33eb1ad7f, b7bc7ab1eb254f8eb346cd7c929c8176; no voters: dac1e1a6988a438dbdc36d89518321dc
I20250114 20:56:54.503783 21980 raft_consensus.cc:2798] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 3 FOLLOWER]: Leader election won for term 3
I20250114 20:56:54.504824 21980 raft_consensus.cc:695] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 3 LEADER]: Becoming Leader. State: Replica: b7bc7ab1eb254f8eb346cd7c929c8176, State: Running, Role: LEADER
I20250114 20:56:54.505577 21980 consensus_queue.cc:237] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } }
I20250114 20:56:54.509164 21983 sys_catalog.cc:455] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: SysCatalogTable state changed. Reason: New leader b7bc7ab1eb254f8eb346cd7c929c8176. Latest consensus state: current_term: 3 leader_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:54.509784 21983 sys_catalog.cc:458] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:54.511128 21985 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:56:54.516534 21985 catalog_manager.cc:670] Loaded metadata for table test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a]
I20250114 20:56:54.521009 21985 tablet_loader.cc:96] loaded metadata for tablet 3c8976f2d9f348399cb5330c271b82ba (table test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a])
I20250114 20:56:54.521972 21985 tablet_loader.cc:96] loaded metadata for tablet 4b631ac95b1049d09ad265a6e8cb9ac3 (table test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a])
I20250114 20:56:54.522884 21985 tablet_loader.cc:96] loaded metadata for tablet d5e8b2c663664c98af6e5effc1c5b272 (table test-workload [id=593f4c5a8a674af5a60c2f35ef90f50a])
I20250114 20:56:54.523766 21985 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:56:54.526616 21985 catalog_manager.cc:1260] Loaded cluster ID: 3b3edf3d1e7c4404aadc9cdf7bc24cff
I20250114 20:56:54.526813 21985 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:56:54.530151 21985 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:56:54.533190 21985 catalog_manager.cc:5910] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: Loaded TSK: 0
I20250114 20:56:54.534062 21985 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:56:54.937708 21470 raft_consensus.cc:1270] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 3 FOLLOWER]: Refusing update from remote peer b7bc7ab1eb254f8eb346cd7c929c8176: Log matching property violated. Preceding OpId in replica: term: 1 index: 9. Preceding OpId from leader: term: 3 index: 10. (index mismatch)
I20250114 20:56:54.938984 21983 consensus_queue.cc:1035] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [LEADER]: Connected to new peer: Peer: permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
I20250114 20:56:54.943665 21986 sys_catalog.cc:455] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: SysCatalogTable state changed. Reason: New leader b7bc7ab1eb254f8eb346cd7c929c8176. Latest consensus state: current_term: 3 leader_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:54.944259 21986 sys_catalog.cc:458] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: This master's current role is: FOLLOWER
I20250114 20:56:54.947211 21980 sys_catalog.cc:455] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: SysCatalogTable state changed. Reason: Peer health change. Latest consensus state: current_term: 3 leader_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:54.948022 21980 sys_catalog.cc:458] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:54.950140 21986 sys_catalog.cc:455] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: SysCatalogTable state changed. Reason: Replicated consensus-only round. Latest consensus state: current_term: 3 leader_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "78c1f705eafc4517abe78ea33eb1ad7f" member_type: VOTER last_known_addr { host: "127.19.228.190" port: 40947 } } peers { permanent_uuid: "dac1e1a6988a438dbdc36d89518321dc" member_type: VOTER last_known_addr { host: "127.19.228.189" port: 38113 } } peers { permanent_uuid: "b7bc7ab1eb254f8eb346cd7c929c8176" member_type: VOTER last_known_addr { host: "127.19.228.188" port: 46881 } } }
I20250114 20:56:54.950887 21986 sys_catalog.cc:458] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [sys.catalog]: This master's current role is: FOLLOWER
W20250114 20:56:54.951237 21577 consensus_peers.cc:487] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 -> Peer dac1e1a6988a438dbdc36d89518321dc (127.19.228.189:38113): Couldn't send request to peer dac1e1a6988a438dbdc36d89518321dc. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.189:38113: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:56:56.091675 21739 heartbeater.cc:502] Master 127.19.228.188:46881 was elected leader, sending a full tablet report...
I20250114 20:56:56.098824 21907 heartbeater.cc:502] Master 127.19.228.188:46881 was elected leader, sending a full tablet report...
I20250114 20:56:56.120456 21817 heartbeater.cc:502] Master 127.19.228.188:46881 was elected leader, sending a full tablet report...
I20250114 20:56:56.345288 21655 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:56.345603 21655 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:56.644510 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:56:56.664106 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:56.664753 20370 tablet_replica.cc:331] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645: stopping tablet replica
I20250114 20:56:56.665220 20370 raft_consensus.cc:2238] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:56.665671 20370 raft_consensus.cc:2267] T d5e8b2c663664c98af6e5effc1c5b272 P b605d082f85044548a7e3d36d909f645 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:56.683758 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:56:56.693625 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:56:56.710780 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:56.711412 20370 tablet_replica.cc:331] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100: stopping tablet replica
I20250114 20:56:56.711961 20370 raft_consensus.cc:2238] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:56.712416 20370 raft_consensus.cc:2267] T 4b631ac95b1049d09ad265a6e8cb9ac3 P 25587d7d11014e408f78502f9a391100 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:56.729866 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:56:56.738968 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:56:56.756069 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:56:56.756738 20370 tablet_replica.cc:331] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766: stopping tablet replica
I20250114 20:56:56.757284 20370 raft_consensus.cc:2238] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:56:56.757732 20370 raft_consensus.cc:2267] T 3c8976f2d9f348399cb5330c271b82ba P ef56de0536214aeeb98e26b2658c5766 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:56.775519 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:56:56.784602 20370 master.cc:537] Master@127.19.228.190:40947 shutting down...
I20250114 20:56:56.800875 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:56:56.801367 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:56.801636 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 78c1f705eafc4517abe78ea33eb1ad7f: stopping tablet replica
I20250114 20:56:56.819214 20370 master.cc:559] Master@127.19.228.190:40947 shutdown complete.
I20250114 20:56:56.831403 20370 master.cc:537] Master@127.19.228.188:46881 shutting down...
I20250114 20:56:56.851639 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 3 LEADER]: Raft consensus shutting down.
I20250114 20:56:56.852496 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:56:56.852777 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P b7bc7ab1eb254f8eb346cd7c929c8176: stopping tablet replica
I20250114 20:56:56.870399 20370 master.cc:559] Master@127.19.228.188:46881 shutdown complete.
[       OK ] AutoRebalancerTest.NextLeaderResumesAutoRebalancing (8898 ms)
[ RUN      ] AutoRebalancerTest.MovesScheduledIfAddTserver
I20250114 20:56:56.900107 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:41465
I20250114 20:56:56.901176 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:56.906335 22001 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:56.907238 22002 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:56.908049 22004 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:56.908649 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:56.909301 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:56.909475 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:56.909582 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888216909572 us; error 0 us; skew 500 ppm
I20250114 20:56:56.910005 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:56.912431 20370 webserver.cc:458] Webserver started at http://127.19.228.190:39603/ using document root <none> and password file <none>
I20250114 20:56:56.912835 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:56.912984 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:56.913188 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:56.914283 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "7981cb69b8154a7eafd5fcb1a795c598"
format_stamp: "Formatted at 2025-01-14 20:56:56 on dist-test-slave-kc3q"
I20250114 20:56:56.918358 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:56:56.921315 22009 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:56.921991 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:56:56.922224 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "7981cb69b8154a7eafd5fcb1a795c598"
format_stamp: "Formatted at 2025-01-14 20:56:56 on dist-test-slave-kc3q"
I20250114 20:56:56.922462 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:56.935884 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:56.937004 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:56.971287 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:41465
I20250114 20:56:56.971374 22060 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:41465 every 8 connection(s)
I20250114 20:56:56.974844 22061 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:56.984102 22061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598: Bootstrap starting.
I20250114 20:56:56.988111 22061 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:56.991811 22061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598: No bootstrap required, opened a new log
I20250114 20:56:56.993638 22061 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER }
I20250114 20:56:56.994010 22061 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:56.994184 22061 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7981cb69b8154a7eafd5fcb1a795c598, State: Initialized, Role: FOLLOWER
I20250114 20:56:56.994657 22061 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER }
I20250114 20:56:56.995057 22061 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:56:56.995237 22061 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:56:56.995431 22061 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:56.999217 22061 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER }
I20250114 20:56:56.999665 22061 leader_election.cc:304] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7981cb69b8154a7eafd5fcb1a795c598; no voters: 
I20250114 20:56:57.000731 22061 leader_election.cc:290] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:56:57.001085 22064 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:57.002341 22064 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 1 LEADER]: Becoming Leader. State: Replica: 7981cb69b8154a7eafd5fcb1a795c598, State: Running, Role: LEADER
I20250114 20:56:57.002854 22064 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER }
I20250114 20:56:57.003463 22061 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:56:57.005465 22065 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7981cb69b8154a7eafd5fcb1a795c598" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER } }
I20250114 20:56:57.005651 22066 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7981cb69b8154a7eafd5fcb1a795c598. Latest consensus state: current_term: 1 leader_uuid: "7981cb69b8154a7eafd5fcb1a795c598" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7981cb69b8154a7eafd5fcb1a795c598" member_type: VOTER } }
I20250114 20:56:57.006105 22065 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:57.006217 22066 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [sys.catalog]: This master's current role is: LEADER
I20250114 20:56:57.009389 22070 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:56:57.013700 22070 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:56:57.019833 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:56:57.021827 22070 catalog_manager.cc:1348] Generated new cluster ID: 5ef472b14f61457885310b2902cf5429
I20250114 20:56:57.022100 22070 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:56:57.058786 22070 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:56:57.059983 22070 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:56:57.071262 22070 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598: Generated new TSK 0
I20250114 20:56:57.071864 22070 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:56:57.086344 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:57.092185 22082 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:57.093094 22083 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:57.094480 20370 server_base.cc:1034] running on GCE node
W20250114 20:56:57.095319 22085 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:57.096056 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:57.096239 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:57.096382 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888217096366 us; error 0 us; skew 500 ppm
I20250114 20:56:57.096824 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:57.099114 20370 webserver.cc:458] Webserver started at http://127.19.228.129:46869/ using document root <none> and password file <none>
I20250114 20:56:57.099555 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:57.099730 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:57.099973 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:57.100997 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "0cae0dfa0e1845c2b2a23c44aa89818b"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.105037 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:56:57.108016 22090 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.108664 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:57.108906 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "0cae0dfa0e1845c2b2a23c44aa89818b"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.109160 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:57.123867 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:57.124969 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:57.126319 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:57.128480 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:57.128664 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.128882 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:57.129027 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.166369 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:46223
I20250114 20:56:57.166445 22152 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:46223 every 8 connection(s)
I20250114 20:56:57.170568 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:57.177779 22157 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:57.178440 22158 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:57.181438 22161 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:57.182070 22153 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41465
I20250114 20:56:57.182163 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:57.182411 22153 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:57.183019 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:56:57.183112 22153 heartbeater.cc:510] Master 127.19.228.190:41465 requested a full tablet report, sending...
W20250114 20:56:57.183223 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:57.183498 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888217183479 us; error 0 us; skew 500 ppm
I20250114 20:56:57.184116 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:57.185329 22026 ts_manager.cc:194] Registered new tserver with Master: 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129:46223)
I20250114 20:56:57.186726 20370 webserver.cc:458] Webserver started at http://127.19.228.130:40755/ using document root <none> and password file <none>
I20250114 20:56:57.187076 22026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:56098
I20250114 20:56:57.187172 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:57.187430 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:57.187711 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:57.188818 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "768a6f8569914306a767c7084f2c3807"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.193229 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:56:57.196310 22165 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.197005 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:56:57.197254 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "768a6f8569914306a767c7084f2c3807"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.197502 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:57.221078 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:57.222124 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:57.223445 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:57.225605 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:57.225790 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.226003 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:57.226148 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.262507 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:39817
I20250114 20:56:57.262590 22227 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:39817 every 8 connection(s)
I20250114 20:56:57.266839 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:56:57.274408 22231 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:56:57.275385 22232 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:57.278146 22228 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41465
W20250114 20:56:57.278425 22234 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:56:57.278501 22228 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:57.279326 22228 heartbeater.cc:510] Master 127.19.228.190:41465 requested a full tablet report, sending...
I20250114 20:56:57.280042 20370 server_base.cc:1034] running on GCE node
I20250114 20:56:57.280937 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:56:57.281134 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:56:57.281284 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888217281269 us; error 0 us; skew 500 ppm
I20250114 20:56:57.281266 22026 ts_manager.cc:194] Registered new tserver with Master: 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817)
I20250114 20:56:57.281996 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:56:57.282759 22026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:56104
I20250114 20:56:57.284546 20370 webserver.cc:458] Webserver started at http://127.19.228.131:42373/ using document root <none> and password file <none>
I20250114 20:56:57.285171 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:56:57.285341 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:56:57.285539 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:56:57.286585 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "cfca5dc0cb7c4fa5bfe6047924c88355"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.290644 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:56:57.293558 22239 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.294253 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:56:57.294507 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "cfca5dc0cb7c4fa5bfe6047924c88355"
format_stamp: "Formatted at 2025-01-14 20:56:57 on dist-test-slave-kc3q"
I20250114 20:56:57.294768 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:56:57.325059 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:56:57.326157 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:56:57.327468 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:56:57.332948 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:56:57.333149 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.333385 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:56:57.333581 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:56:57.369889 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:43559
I20250114 20:56:57.369976 22301 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:43559 every 8 connection(s)
I20250114 20:56:57.381657 22302 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41465
I20250114 20:56:57.381985 22302 heartbeater.cc:463] Registering TS with master...
I20250114 20:56:57.382633 22302 heartbeater.cc:510] Master 127.19.228.190:41465 requested a full tablet report, sending...
I20250114 20:56:57.384290 22026 ts_manager.cc:194] Registered new tserver with Master: cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559)
I20250114 20:56:57.384861 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012106086s
I20250114 20:56:57.385846 22026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:56120
I20250114 20:56:58.189419 22153 heartbeater.cc:502] Master 127.19.228.190:41465 was elected leader, sending a full tablet report...
I20250114 20:56:58.285454 22228 heartbeater.cc:502] Master 127.19.228.190:41465 was elected leader, sending a full tablet report...
I20250114 20:56:58.388341 22302 heartbeater.cc:502] Master 127.19.228.190:41465 was elected leader, sending a full tablet report...
I20250114 20:56:58.416807 20370 test_util.cc:274] Using random seed: -851052908
I20250114 20:56:58.437347 22026 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:56128:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377?""\004\001\000\377\377\377?"
  indirect_data: """"
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:56:58.439288 22026 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:56:58.480152 22267 tablet_service.cc:1467] Processing CreateTablet for tablet f795b6386c1e489c8cf473b607fcba0e (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION VALUES < 1073741823
I20250114 20:56:58.480636 22266 tablet_service.cc:1467] Processing CreateTablet for tablet 5ffa45527aab43eaa6d386859f0097c4 (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION 1073741823 <= VALUES
I20250114 20:56:58.481498 22267 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f795b6386c1e489c8cf473b607fcba0e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.482103 22266 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5ffa45527aab43eaa6d386859f0097c4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.499650 22322 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: Bootstrap starting.
I20250114 20:56:58.500284 22193 tablet_service.cc:1467] Processing CreateTablet for tablet f795b6386c1e489c8cf473b607fcba0e (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION VALUES < 1073741823
I20250114 20:56:58.501685 22193 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f795b6386c1e489c8cf473b607fcba0e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.504020 22322 tablet_bootstrap.cc:654] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.504655 22192 tablet_service.cc:1467] Processing CreateTablet for tablet 5ffa45527aab43eaa6d386859f0097c4 (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION 1073741823 <= VALUES
I20250114 20:56:58.505968 22192 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5ffa45527aab43eaa6d386859f0097c4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.514585 22118 tablet_service.cc:1467] Processing CreateTablet for tablet f795b6386c1e489c8cf473b607fcba0e (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION VALUES < 1073741823
I20250114 20:56:58.515995 22118 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f795b6386c1e489c8cf473b607fcba0e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.525386 22324 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: Bootstrap starting.
I20250114 20:56:58.525141 22117 tablet_service.cc:1467] Processing CreateTablet for tablet 5ffa45527aab43eaa6d386859f0097c4 (DEFAULT_TABLE table=test-workload [id=8b3087e037c74922af8db65139eeef08]), partition=RANGE (key) PARTITION 1073741823 <= VALUES
I20250114 20:56:58.525590 22322 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: No bootstrap required, opened a new log
I20250114 20:56:58.526105 22322 ts_tablet_manager.cc:1397] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: Time spent bootstrapping tablet: real 0.027s	user 0.020s	sys 0.000s
I20250114 20:56:58.526432 22117 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5ffa45527aab43eaa6d386859f0097c4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:56:58.528750 22322 raft_consensus.cc:357] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.529533 22322 raft_consensus.cc:383] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.529861 22322 raft_consensus.cc:738] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cfca5dc0cb7c4fa5bfe6047924c88355, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.530561 22322 consensus_queue.cc:260] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.531656 22324 tablet_bootstrap.cc:654] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.543849 22322 ts_tablet_manager.cc:1428] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: Time spent starting tablet: real 0.017s	user 0.007s	sys 0.012s
I20250114 20:56:58.544936 22322 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: Bootstrap starting.
I20250114 20:56:58.550927 22322 tablet_bootstrap.cc:654] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.554558 22327 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: Bootstrap starting.
I20250114 20:56:58.554994 22324 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: No bootstrap required, opened a new log
I20250114 20:56:58.555516 22324 ts_tablet_manager.cc:1397] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: Time spent bootstrapping tablet: real 0.030s	user 0.017s	sys 0.013s
I20250114 20:56:58.558904 22322 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: No bootstrap required, opened a new log
I20250114 20:56:58.558429 22324 raft_consensus.cc:357] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.559314 22322 ts_tablet_manager.cc:1397] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: Time spent bootstrapping tablet: real 0.015s	user 0.008s	sys 0.004s
I20250114 20:56:58.559314 22324 raft_consensus.cc:383] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.559867 22324 raft_consensus.cc:738] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 768a6f8569914306a767c7084f2c3807, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.560810 22327 tablet_bootstrap.cc:654] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.560655 22324 consensus_queue.cc:260] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.561554 22322 raft_consensus.cc:357] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.562077 22322 raft_consensus.cc:383] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.562386 22322 raft_consensus.cc:738] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cfca5dc0cb7c4fa5bfe6047924c88355, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.563776 22324 ts_tablet_manager.cc:1428] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: Time spent starting tablet: real 0.008s	user 0.003s	sys 0.004s
I20250114 20:56:58.564329 22322 consensus_queue.cc:260] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.565099 22324 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: Bootstrap starting.
I20250114 20:56:58.565863 22327 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: No bootstrap required, opened a new log
I20250114 20:56:58.566329 22327 ts_tablet_manager.cc:1397] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: Time spent bootstrapping tablet: real 0.012s	user 0.007s	sys 0.004s
I20250114 20:56:58.566628 22322 ts_tablet_manager.cc:1428] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: Time spent starting tablet: real 0.007s	user 0.004s	sys 0.001s
I20250114 20:56:58.568805 22327 raft_consensus.cc:357] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.569502 22327 raft_consensus.cc:383] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.569725 22327 raft_consensus.cc:738] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0cae0dfa0e1845c2b2a23c44aa89818b, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.570338 22327 consensus_queue.cc:260] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.571615 22324 tablet_bootstrap.cc:654] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.577986 22324 tablet_bootstrap.cc:492] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: No bootstrap required, opened a new log
I20250114 20:56:58.578478 22324 ts_tablet_manager.cc:1397] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: Time spent bootstrapping tablet: real 0.014s	user 0.006s	sys 0.005s
I20250114 20:56:58.579288 22327 ts_tablet_manager.cc:1428] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: Time spent starting tablet: real 0.013s	user 0.004s	sys 0.007s
I20250114 20:56:58.581017 22324 raft_consensus.cc:357] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.581730 22327 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Bootstrap starting.
I20250114 20:56:58.581765 22324 raft_consensus.cc:383] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.582221 22324 raft_consensus.cc:738] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 768a6f8569914306a767c7084f2c3807, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.582897 22324 consensus_queue.cc:260] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.585115 22324 ts_tablet_manager.cc:1428] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: Time spent starting tablet: real 0.006s	user 0.005s	sys 0.000s
I20250114 20:56:58.590528 22327 tablet_bootstrap.cc:654] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Neither blocks nor log segments found. Creating new log.
I20250114 20:56:58.594410 22327 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: No bootstrap required, opened a new log
I20250114 20:56:58.594790 22327 ts_tablet_manager.cc:1397] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Time spent bootstrapping tablet: real 0.013s	user 0.009s	sys 0.000s
I20250114 20:56:58.596827 22327 raft_consensus.cc:357] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.597460 22327 raft_consensus.cc:383] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:56:58.597728 22327 raft_consensus.cc:738] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0cae0dfa0e1845c2b2a23c44aa89818b, State: Initialized, Role: FOLLOWER
I20250114 20:56:58.598295 22327 consensus_queue.cc:260] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.599934 22327 ts_tablet_manager.cc:1428] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Time spent starting tablet: real 0.005s	user 0.006s	sys 0.000s
I20250114 20:56:58.600487 22331 raft_consensus.cc:491] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:58.600852 22331 raft_consensus.cc:513] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.602717 22331 leader_election.cc:290] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817), cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559)
I20250114 20:56:58.613365 22203 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "768a6f8569914306a767c7084f2c3807" is_pre_election: true
I20250114 20:56:58.613943 22203 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 0.
I20250114 20:56:58.614876 22092 leader_election.cc:304] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cae0dfa0e1845c2b2a23c44aa89818b, 768a6f8569914306a767c7084f2c3807; no voters: 
I20250114 20:56:58.615502 22331 raft_consensus.cc:2798] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:58.615835 22331 raft_consensus.cc:491] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:58.616171 22331 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.616173 22277 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" is_pre_election: true
I20250114 20:56:58.616967 22277 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 0.
I20250114 20:56:58.620927 22331 raft_consensus.cc:513] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.622356 22331 leader_election.cc:290] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 election: Requested vote from peers 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817), cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559)
I20250114 20:56:58.623087 22203 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "768a6f8569914306a767c7084f2c3807"
I20250114 20:56:58.623250 22277 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355"
I20250114 20:56:58.623595 22203 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.623800 22277 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.628500 22277 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 1.
I20250114 20:56:58.628512 22203 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 1.
I20250114 20:56:58.629618 22094 leader_election.cc:304] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cae0dfa0e1845c2b2a23c44aa89818b, cfca5dc0cb7c4fa5bfe6047924c88355; no voters: 
I20250114 20:56:58.630295 22331 raft_consensus.cc:2798] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:58.631240 22331 raft_consensus.cc:695] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Becoming Leader. State: Replica: 0cae0dfa0e1845c2b2a23c44aa89818b, State: Running, Role: LEADER
I20250114 20:56:58.631924 22331 consensus_queue.cc:237] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:58.639420 22026 catalog_manager.cc:5526] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b reported cstate change: term changed from 0 to 1, leader changed from <none> to 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129). New cstate: current_term: 1 leader_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:58.709898 22331 raft_consensus.cc:491] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:56:58.710296 22331 raft_consensus.cc:513] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.711730 22331 leader_election.cc:290] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559), 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817)
I20250114 20:56:58.712522 22277 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f795b6386c1e489c8cf473b607fcba0e" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" is_pre_election: true
I20250114 20:56:58.712723 22203 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f795b6386c1e489c8cf473b607fcba0e" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "768a6f8569914306a767c7084f2c3807" is_pre_election: true
I20250114 20:56:58.713189 22277 raft_consensus.cc:2463] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 0.
I20250114 20:56:58.713305 22203 raft_consensus.cc:2463] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 0.
I20250114 20:56:58.714018 22094 leader_election.cc:304] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cae0dfa0e1845c2b2a23c44aa89818b, cfca5dc0cb7c4fa5bfe6047924c88355; no voters: 
I20250114 20:56:58.714803 22331 raft_consensus.cc:2798] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:56:58.715063 22331 raft_consensus.cc:491] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:56:58.715302 22331 raft_consensus.cc:3054] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.719172 22331 raft_consensus.cc:513] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.720522 22331 leader_election.cc:290] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 election: Requested vote from peers cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559), 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817)
I20250114 20:56:58.721305 22277 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f795b6386c1e489c8cf473b607fcba0e" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355"
I20250114 20:56:58.721449 22203 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f795b6386c1e489c8cf473b607fcba0e" candidate_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "768a6f8569914306a767c7084f2c3807"
I20250114 20:56:58.721750 22277 raft_consensus.cc:3054] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.721912 22203 raft_consensus.cc:3054] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:56:58.725884 22277 raft_consensus.cc:2463] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 1.
I20250114 20:56:58.726015 22203 raft_consensus.cc:2463] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0cae0dfa0e1845c2b2a23c44aa89818b in term 1.
I20250114 20:56:58.726728 22094 leader_election.cc:304] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cae0dfa0e1845c2b2a23c44aa89818b, cfca5dc0cb7c4fa5bfe6047924c88355; no voters: 
I20250114 20:56:58.727326 22331 raft_consensus.cc:2798] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:56:58.727705 22331 raft_consensus.cc:695] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Becoming Leader. State: Replica: 0cae0dfa0e1845c2b2a23c44aa89818b, State: Running, Role: LEADER
I20250114 20:56:58.728375 22331 consensus_queue.cc:237] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } }
I20250114 20:56:58.734087 22026 catalog_manager.cc:5526] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b reported cstate change: term changed from 0 to 1, leader changed from <none> to 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129). New cstate: current_term: 1 leader_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } health_report { overall_health: HEALTHY } } }
I20250114 20:56:59.019781 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:56:59.030668 22128 tablet_service.cc:1967] Received LeaderStepDown RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4"
dest_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b"
mode: GRACEFUL
new_leader_uuid: "768a6f8569914306a767c7084f2c3807"
 from {username='slave'} at 127.0.0.1:45238
I20250114 20:56:59.031229 22128 raft_consensus.cc:604] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Received request to transfer leadership to TS 768a6f8569914306a767c7084f2c3807
I20250114 20:56:59.032197 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 1
I20250114 20:56:59.032548 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:56:59.123844 22331 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Connected to new peer: Peer: permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:59.129778 22336 raft_consensus.cc:988] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: : Instructing follower 768a6f8569914306a767c7084f2c3807 to start an election
I20250114 20:56:59.130436 22331 raft_consensus.cc:1076] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Signalling peer 768a6f8569914306a767c7084f2c3807 to start an election
I20250114 20:56:59.136636 22202 tablet_service.cc:1939] Received Run Leader Election RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4"
dest_uuid: "768a6f8569914306a767c7084f2c3807"
 from {username='slave'} at 127.0.0.1:41502
I20250114 20:56:59.137962 22202 raft_consensus.cc:491] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250114 20:56:59.138339 22202 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:59.142424 22336 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:59.144788 22202 raft_consensus.cc:513] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:59.147244 22202 leader_election.cc:290] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [CANDIDATE]: Term 2 election: Requested vote from peers 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129:46223), cfca5dc0cb7c4fa5bfe6047924c88355 (127.19.228.131:43559)
I20250114 20:56:59.154292 22336 consensus_queue.cc:1035] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:59.163626 22128 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "768a6f8569914306a767c7084f2c3807" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b"
I20250114 20:56:59.164417 22128 raft_consensus.cc:3049] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Stepping down as leader of term 1
I20250114 20:56:59.164774 22128 raft_consensus.cc:738] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 0cae0dfa0e1845c2b2a23c44aa89818b, State: Running, Role: LEADER
I20250114 20:56:59.165513 22128 consensus_queue.cc:260] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:59.166664 22128 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:59.171672 22275 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5ffa45527aab43eaa6d386859f0097c4" candidate_uuid: "768a6f8569914306a767c7084f2c3807" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355"
I20250114 20:56:59.172381 22275 raft_consensus.cc:3054] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:56:59.175123 22128 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 768a6f8569914306a767c7084f2c3807 in term 2.
I20250114 20:56:59.176447 22166 leader_election.cc:304] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0cae0dfa0e1845c2b2a23c44aa89818b, 768a6f8569914306a767c7084f2c3807; no voters: 
I20250114 20:56:59.177292 22341 raft_consensus.cc:2798] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:56:59.179420 22336 consensus_queue.cc:1035] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [LEADER]: Connected to new peer: Peer: permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:56:59.186873 22275 raft_consensus.cc:2463] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 768a6f8569914306a767c7084f2c3807 in term 2.
I20250114 20:56:59.205601 22341 raft_consensus.cc:695] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 LEADER]: Becoming Leader. State: Replica: 768a6f8569914306a767c7084f2c3807, State: Running, Role: LEADER
I20250114 20:56:59.206496 22341 consensus_queue.cc:237] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } }
I20250114 20:56:59.212450 22026 catalog_manager.cc:5526] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 reported cstate change: term changed from 1 to 2, leader changed from 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129) to 768a6f8569914306a767c7084f2c3807 (127.19.228.130). New cstate: current_term: 2 leader_uuid: "768a6f8569914306a767c7084f2c3807" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } health_report { overall_health: UNKNOWN } } }
I20250114 20:56:59.602707 22275 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:59.603830 22341 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:56:59.611181 22128 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:56:59.612541 22341 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:57:01.033272 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:01.035188 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:01.035504 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:03.036268 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:03.038046 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:03.038372 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:05.039371 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:05.041851 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:05.042194 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:07.043037 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:07.044847 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:07.045167 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:09.046036 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:09.048139 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:09.048442 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:09.834369 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:09.841337 22363 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:09.841846 22364 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:09.842867 22366 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:09.843650 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:09.844493 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:09.844700 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:09.844830 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888229844819 us; error 0 us; skew 500 ppm
I20250114 20:57:09.845248 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:09.847819 20370 webserver.cc:458] Webserver started at http://127.19.228.132:43445/ using document root <none> and password file <none>
I20250114 20:57:09.848273 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:09.848440 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:09.848697 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:09.849942 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "57ddda836f3f46e89279d64cd71ea837"
format_stamp: "Formatted at 2025-01-14 20:57:09 on dist-test-slave-kc3q"
I20250114 20:57:09.854408 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.002s
I20250114 20:57:09.857560 22371 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:09.858296 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:57:09.858551 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "57ddda836f3f46e89279d64cd71ea837"
format_stamp: "Formatted at 2025-01-14 20:57:09 on dist-test-slave-kc3q"
I20250114 20:57:09.858793 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:09.873919 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:09.875072 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:09.876499 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:09.878815 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:09.878990 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:09.879168 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:09.879295 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:09.916960 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:46653
I20250114 20:57:09.917037 22433 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:46653 every 8 connection(s)
I20250114 20:57:09.930565 22434 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41465
I20250114 20:57:09.930887 22434 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:09.931519 22434 heartbeater.cc:510] Master 127.19.228.190:41465 requested a full tablet report, sending...
I20250114 20:57:09.933169 22026 ts_manager.cc:194] Registered new tserver with Master: 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132:46653)
I20250114 20:57:09.934340 22026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:38222
I20250114 20:57:10.077644 22202 consensus_queue.cc:237] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } }
I20250114 20:57:10.084095 22275 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:57:10.085565 22128 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:57:10.086015 22362 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.001s
I20250114 20:57:10.087108 22441 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:57:10.096498 22169 consensus_peers.cc:487] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 -> Peer 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132:46653): Couldn't send request to peer 57ddda836f3f46e89279d64cd71ea837. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 5ffa45527aab43eaa6d386859f0097c4. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:57:10.098165 22362 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 LEADER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } } }
I20250114 20:57:10.099958 22275 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } } }
I20250114 20:57:10.101212 22128 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } } }
I20250114 20:57:10.111730 22026 catalog_manager.cc:5526] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 reported cstate change: config changed from index -1 to 3, NON_VOTER 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) added. New cstate: current_term: 2 leader_uuid: "768a6f8569914306a767c7084f2c3807" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } } }
I20250114 20:57:10.704278 22452 ts_tablet_manager.cc:927] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Initiating tablet copy from peer 768a6f8569914306a767c7084f2c3807 (127.19.228.130:39817)
I20250114 20:57:10.706077 22452 tablet_copy_client.cc:323] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:39817
I20250114 20:57:10.722240 22213 tablet_copy_service.cc:140] P 768a6f8569914306a767c7084f2c3807: Received BeginTabletCopySession request for tablet 5ffa45527aab43eaa6d386859f0097c4 from peer 57ddda836f3f46e89279d64cd71ea837 ({username='slave'} at 127.0.0.1:50980)
I20250114 20:57:10.722860 22213 tablet_copy_service.cc:161] P 768a6f8569914306a767c7084f2c3807: Beginning new tablet copy session on tablet 5ffa45527aab43eaa6d386859f0097c4 from peer 57ddda836f3f46e89279d64cd71ea837 at {username='slave'} at 127.0.0.1:50980: session id = 57ddda836f3f46e89279d64cd71ea837-5ffa45527aab43eaa6d386859f0097c4
I20250114 20:57:10.729138 22213 tablet_copy_source_session.cc:215] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:57:10.732182 22452 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5ffa45527aab43eaa6d386859f0097c4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:10.741703 22452 tablet_copy_client.cc:806] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: tablet copy: Starting download of 0 data blocks...
I20250114 20:57:10.742435 22452 tablet_copy_client.cc:670] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: tablet copy: Starting download of 1 WAL segments...
I20250114 20:57:10.745673 22452 tablet_copy_client.cc:538] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:57:10.751663 22452 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Bootstrap starting.
I20250114 20:57:10.768622 22452 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Bootstrap replayed 1/1 log segments. Stats: ops{read=3 overwritten=0 applied=3 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:57:10.769352 22452 tablet_bootstrap.cc:492] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Bootstrap complete.
I20250114 20:57:10.769834 22452 ts_tablet_manager.cc:1397] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Time spent bootstrapping tablet: real 0.018s	user 0.019s	sys 0.001s
I20250114 20:57:10.772009 22452 raft_consensus.cc:357] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } }
I20250114 20:57:10.772611 22452 raft_consensus.cc:738] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 57ddda836f3f46e89279d64cd71ea837, State: Initialized, Role: LEARNER
I20250114 20:57:10.773172 22452 consensus_queue.cc:260] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 3, Last appended: 2.3, Last appended by leader: 3, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: true } }
I20250114 20:57:10.775346 22434 heartbeater.cc:502] Master 127.19.228.190:41465 was elected leader, sending a full tablet report...
I20250114 20:57:10.775843 22452 ts_tablet_manager.cc:1428] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.004s
I20250114 20:57:10.777686 22213 tablet_copy_service.cc:342] P 768a6f8569914306a767c7084f2c3807: Request end of tablet copy session 57ddda836f3f46e89279d64cd71ea837-5ffa45527aab43eaa6d386859f0097c4 received from {username='slave'} at 127.0.0.1:50980
I20250114 20:57:10.778126 22213 tablet_copy_service.cc:434] P 768a6f8569914306a767c7084f2c3807: ending tablet copy session 57ddda836f3f46e89279d64cd71ea837-5ffa45527aab43eaa6d386859f0097c4 on tablet 5ffa45527aab43eaa6d386859f0097c4 with peer 57ddda836f3f46e89279d64cd71ea837
I20250114 20:57:11.049222 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:11.051168 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:11.051582 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:11.290956 22409 raft_consensus.cc:1212] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.2->[2.3-2.3]   Dedup: 2.3->[]
I20250114 20:57:11.853945 22459 raft_consensus.cc:1059] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: attempting to promote NON_VOTER 57ddda836f3f46e89279d64cd71ea837 to VOTER
I20250114 20:57:11.856010 22459 consensus_queue.cc:237] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 2.3, Last appended by leader: 1, Current term: 2, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } }
I20250114 20:57:11.861577 22275 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:57:11.862552 22409 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 LEARNER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:57:11.863531 22461 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.001s
I20250114 20:57:11.865216 22461 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:57:11.867678 22128 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:57:11.869060 22461 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:57:11.873907 22459 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 LEADER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.876031 22409 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.879175 22128 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.883107 22275 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.887269 22024 catalog_manager.cc:5526] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 reported cstate change: config changed from index 3 to 4, 57ddda836f3f46e89279d64cd71ea837 (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "768a6f8569914306a767c7084f2c3807" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "0cae0dfa0e1845c2b2a23c44aa89818b" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 46223 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250114 20:57:11.916204 22202 consensus_queue.cc:237] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 2.4, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } }
I20250114 20:57:11.920480 22409 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 4. Preceding OpId from leader: term: 2 index: 5. (index mismatch)
I20250114 20:57:11.921973 22461 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:57:11.922291 22275 raft_consensus.cc:1270] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Refusing update from remote peer 768a6f8569914306a767c7084f2c3807: Log matching property violated. Preceding OpId in replica: term: 2 index: 4. Preceding OpId from leader: term: 2 index: 5. (index mismatch)
I20250114 20:57:11.923466 22459 consensus_queue.cc:1035] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:57:11.928228 22461 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 LEADER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.930331 22409 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 FOLLOWER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.931957 22275 raft_consensus.cc:2949] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.937506 22011 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 5ffa45527aab43eaa6d386859f0097c4 with cas_config_opid_index 4: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:57:11.939502 22025 catalog_manager.cc:5526] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 reported cstate change: config changed from index 4 to 5, VOTER 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129) evicted. New cstate: current_term: 2 leader_uuid: "768a6f8569914306a767c7084f2c3807" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "768a6f8569914306a767c7084f2c3807" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39817 } } peers { permanent_uuid: "cfca5dc0cb7c4fa5bfe6047924c88355" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 43559 } } peers { permanent_uuid: "57ddda836f3f46e89279d64cd71ea837" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 46653 } attrs { promote: false } } }
I20250114 20:57:11.949470 22118 tablet_service.cc:1514] Processing DeleteTablet for tablet 5ffa45527aab43eaa6d386859f0097c4 with delete_type TABLET_DATA_TOMBSTONED (TS 0cae0dfa0e1845c2b2a23c44aa89818b not found in new config with opid_index 5) from {username='slave'} at 127.0.0.1:45234
I20250114 20:57:11.951511 22473 tablet_replica.cc:331] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: stopping tablet replica
I20250114 20:57:11.952280 22473 raft_consensus.cc:2238] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:57:11.952853 22473 raft_consensus.cc:2267] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:11.955183 22473 ts_tablet_manager.cc:1905] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:57:11.964768 22473 ts_tablet_manager.cc:1918] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.4
I20250114 20:57:11.965060 22473 log.cc:1198] T 5ffa45527aab43eaa6d386859f0097c4 P 0cae0dfa0e1845c2b2a23c44aa89818b: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.MovesScheduledIfAddTserver.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/5ffa45527aab43eaa6d386859f0097c4
I20250114 20:57:11.966291 22010 catalog_manager.cc:4872] TS 0cae0dfa0e1845c2b2a23c44aa89818b (127.19.228.129:46223): tablet 5ffa45527aab43eaa6d386859f0097c4 (table test-workload [id=8b3087e037c74922af8db65139eeef08]) successfully deleted
I20250114 20:57:13.052433 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:13.054288 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:13.054594 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:15.055711 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:15.057823 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:15.058144 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:17.059001 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:17.060969 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:17.061332 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:19.062144 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:19.063984 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:19.064292 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:21.065058 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:21.066851 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:21.067162 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:23.068032 22079 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:23.069895 22079 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:23.070209 22079 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:23.270809 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:57:23.292893 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:23.293576 20370 tablet_replica.cc:331] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b: stopping tablet replica
I20250114 20:57:23.294016 20370 raft_consensus.cc:2238] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:57:23.294847 20370 raft_consensus.cc:2267] T f795b6386c1e489c8cf473b607fcba0e P 0cae0dfa0e1845c2b2a23c44aa89818b [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.315635 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:57:23.327028 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:57:23.347007 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:23.347726 20370 tablet_replica.cc:331] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807: stopping tablet replica
I20250114 20:57:23.348295 20370 raft_consensus.cc:2238] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:57:23.348739 20370 raft_consensus.cc:2267] T f795b6386c1e489c8cf473b607fcba0e P 768a6f8569914306a767c7084f2c3807 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.350579 20370 tablet_replica.cc:331] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807: stopping tablet replica
I20250114 20:57:23.350996 20370 raft_consensus.cc:2238] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:57:23.351831 20370 raft_consensus.cc:2267] T 5ffa45527aab43eaa6d386859f0097c4 P 768a6f8569914306a767c7084f2c3807 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.372619 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:57:23.383527 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:57:23.402292 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:23.402827 20370 tablet_replica.cc:331] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355: stopping tablet replica
I20250114 20:57:23.403383 20370 raft_consensus.cc:2238] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:57:23.404273 20370 raft_consensus.cc:2267] T 5ffa45527aab43eaa6d386859f0097c4 P cfca5dc0cb7c4fa5bfe6047924c88355 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.406206 20370 tablet_replica.cc:331] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355: stopping tablet replica
I20250114 20:57:23.406606 20370 raft_consensus.cc:2238] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:57:23.407035 20370 raft_consensus.cc:2267] T f795b6386c1e489c8cf473b607fcba0e P cfca5dc0cb7c4fa5bfe6047924c88355 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.425637 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:57:23.435803 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:57:23.452486 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:23.453048 20370 tablet_replica.cc:331] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837: stopping tablet replica
I20250114 20:57:23.453544 20370 raft_consensus.cc:2238] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:57:23.454147 20370 raft_consensus.cc:2267] T 5ffa45527aab43eaa6d386859f0097c4 P 57ddda836f3f46e89279d64cd71ea837 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.471967 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:57:23.480827 20370 master.cc:537] Master@127.19.228.190:41465 shutting down...
I20250114 20:57:23.495170 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:57:23.495761 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:23.496063 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 7981cb69b8154a7eafd5fcb1a795c598: stopping tablet replica
I20250114 20:57:23.513746 20370 master.cc:559] Master@127.19.228.190:41465 shutdown complete.
[       OK ] AutoRebalancerTest.MovesScheduledIfAddTserver (26646 ms)
[ RUN      ] AutoRebalancerTest.NoReplicaMovesIfNoTservers
I20250114 20:57:23.546157 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:36531
I20250114 20:57:23.547176 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:23.553010 22482 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:23.553211 22481 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:23.553813 22484 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:23.554088 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:23.555683 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:23.555863 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:23.555986 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888243555971 us; error 0 us; skew 500 ppm
I20250114 20:57:23.556382 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:23.562196 20370 webserver.cc:458] Webserver started at http://127.19.228.190:40041/ using document root <none> and password file <none>
I20250114 20:57:23.562618 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:23.562757 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:23.562979 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:23.564242 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTservers.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "73bd688b8b3541eca80385920846f9d6"
format_stamp: "Formatted at 2025-01-14 20:57:23 on dist-test-slave-kc3q"
I20250114 20:57:23.568374 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:57:23.571426 22489 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:23.572161 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:57:23.572412 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTservers.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "73bd688b8b3541eca80385920846f9d6"
format_stamp: "Formatted at 2025-01-14 20:57:23 on dist-test-slave-kc3q"
I20250114 20:57:23.572651 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTservers.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTservers.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTservers.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:23.632230 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:23.633364 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:23.667649 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:36531
I20250114 20:57:23.667738 22540 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:36531 every 8 connection(s)
I20250114 20:57:23.671173 22541 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:23.680833 22541 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6: Bootstrap starting.
I20250114 20:57:23.684958 22541 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:23.688674 22541 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6: No bootstrap required, opened a new log
I20250114 20:57:23.690580 22541 raft_consensus.cc:357] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER }
I20250114 20:57:23.691006 22541 raft_consensus.cc:383] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:23.691185 22541 raft_consensus.cc:738] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 73bd688b8b3541eca80385920846f9d6, State: Initialized, Role: FOLLOWER
I20250114 20:57:23.691694 22541 consensus_queue.cc:260] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER }
I20250114 20:57:23.692133 22541 raft_consensus.cc:397] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:57:23.692322 22541 raft_consensus.cc:491] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:57:23.692531 22541 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:23.696431 22541 raft_consensus.cc:513] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER }
I20250114 20:57:23.696852 22541 leader_election.cc:304] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 73bd688b8b3541eca80385920846f9d6; no voters: 
I20250114 20:57:23.697784 22541 leader_election.cc:290] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:57:23.698138 22544 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:57:23.699360 22544 raft_consensus.cc:695] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 1 LEADER]: Becoming Leader. State: Replica: 73bd688b8b3541eca80385920846f9d6, State: Running, Role: LEADER
I20250114 20:57:23.699952 22544 consensus_queue.cc:237] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER }
I20250114 20:57:23.700533 22541 sys_catalog.cc:564] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:57:23.704393 22546 sys_catalog.cc:455] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 73bd688b8b3541eca80385920846f9d6. Latest consensus state: current_term: 1 leader_uuid: "73bd688b8b3541eca80385920846f9d6" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER } }
I20250114 20:57:23.704604 22545 sys_catalog.cc:455] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "73bd688b8b3541eca80385920846f9d6" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "73bd688b8b3541eca80385920846f9d6" member_type: VOTER } }
I20250114 20:57:23.705080 22546 sys_catalog.cc:458] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:23.705181 22545 sys_catalog.cc:458] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:23.708246 22551 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:57:23.714113 22551 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:57:23.714943 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:57:23.721786 22551 catalog_manager.cc:1348] Generated new cluster ID: 853ce1e77038400082258bf62333ce89
I20250114 20:57:23.722069 22551 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:57:23.744989 22551 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:57:23.746146 22551 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:57:23.760867 22551 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6: Generated new TSK 0
I20250114 20:57:23.761404 22551 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:57:23.780145 20370 internal_mini_cluster.cc:371] 0 TS(s) registered with all masters after 0.000164429s
I20250114 20:57:35.871577 20370 master.cc:537] Master@127.19.228.190:36531 shutting down...
I20250114 20:57:35.892544 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:57:35.893286 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:35.893643 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 73bd688b8b3541eca80385920846f9d6: stopping tablet replica
I20250114 20:57:35.911975 20370 master.cc:559] Master@127.19.228.190:36531 shutdown complete.
[       OK ] AutoRebalancerTest.NoReplicaMovesIfNoTservers (12393 ms)
[ RUN      ] AutoRebalancerTest.NoReplicaMovesIfNoTablets
I20250114 20:57:35.940007 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:33343
I20250114 20:57:35.941021 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:35.946079 22565 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:35.946422 22566 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:35.947515 22568 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:35.947889 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:35.948573 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:35.948746 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:35.948864 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888255948853 us; error 0 us; skew 500 ppm
I20250114 20:57:35.949297 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:35.951519 20370 webserver.cc:458] Webserver started at http://127.19.228.190:41537/ using document root <none> and password file <none>
I20250114 20:57:35.951968 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:35.952116 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:35.952334 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:35.953451 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "68e322d64664403da622c4bcc5e20834"
format_stamp: "Formatted at 2025-01-14 20:57:35 on dist-test-slave-kc3q"
I20250114 20:57:35.957646 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.002s
I20250114 20:57:35.960566 22573 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:35.961241 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:57:35.961464 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "68e322d64664403da622c4bcc5e20834"
format_stamp: "Formatted at 2025-01-14 20:57:35 on dist-test-slave-kc3q"
I20250114 20:57:35.961683 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:35.975893 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:35.976935 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:36.011247 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:33343
I20250114 20:57:36.011341 22624 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:33343 every 8 connection(s)
I20250114 20:57:36.014979 22625 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:36.025604 22625 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834: Bootstrap starting.
I20250114 20:57:36.030009 22625 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:36.033877 22625 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834: No bootstrap required, opened a new log
I20250114 20:57:36.035919 22625 raft_consensus.cc:357] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER }
I20250114 20:57:36.036375 22625 raft_consensus.cc:383] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:36.036589 22625 raft_consensus.cc:738] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 68e322d64664403da622c4bcc5e20834, State: Initialized, Role: FOLLOWER
I20250114 20:57:36.037091 22625 consensus_queue.cc:260] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER }
I20250114 20:57:36.037513 22625 raft_consensus.cc:397] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:57:36.037712 22625 raft_consensus.cc:491] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:57:36.037964 22625 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:36.041960 22625 raft_consensus.cc:513] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER }
I20250114 20:57:36.042425 22625 leader_election.cc:304] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 68e322d64664403da622c4bcc5e20834; no voters: 
I20250114 20:57:36.043475 22625 leader_election.cc:290] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:57:36.043890 22628 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:57:36.045102 22628 raft_consensus.cc:695] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 1 LEADER]: Becoming Leader. State: Replica: 68e322d64664403da622c4bcc5e20834, State: Running, Role: LEADER
I20250114 20:57:36.045665 22628 consensus_queue.cc:237] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER }
I20250114 20:57:36.046326 22625 sys_catalog.cc:564] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:57:36.048404 22629 sys_catalog.cc:455] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "68e322d64664403da622c4bcc5e20834" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER } }
I20250114 20:57:36.049114 22629 sys_catalog.cc:458] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:36.048452 22630 sys_catalog.cc:455] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 68e322d64664403da622c4bcc5e20834. Latest consensus state: current_term: 1 leader_uuid: "68e322d64664403da622c4bcc5e20834" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "68e322d64664403da622c4bcc5e20834" member_type: VOTER } }
I20250114 20:57:36.050110 22630 sys_catalog.cc:458] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:36.052860 22633 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:57:36.059724 22633 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:57:36.060645 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:57:36.067878 22633 catalog_manager.cc:1348] Generated new cluster ID: e48ee6c27c8047a0ac0c0cb523202837
I20250114 20:57:36.068107 22633 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:57:36.080912 22633 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:57:36.082017 22633 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:57:36.094496 22633 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834: Generated new TSK 0
I20250114 20:57:36.094951 22633 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:57:36.127007 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:36.132527 22646 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:36.133694 22647 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:36.134500 22649 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:36.134987 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:36.135715 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:36.135890 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:36.136049 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888256136024 us; error 0 us; skew 500 ppm
I20250114 20:57:36.136512 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:36.138540 20370 webserver.cc:458] Webserver started at http://127.19.228.129:43083/ using document root <none> and password file <none>
I20250114 20:57:36.138952 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:36.139110 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:36.139343 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:36.140450 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "32449db1765042bb8a9ba49111ba9535"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.144487 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:57:36.147441 22654 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.148113 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:57:36.148358 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "32449db1765042bb8a9ba49111ba9535"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.148622 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:36.159456 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:36.160565 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:36.161859 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:36.164116 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:36.164306 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.164517 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:36.164659 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.202039 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:40709
I20250114 20:57:36.202124 22716 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:40709 every 8 connection(s)
I20250114 20:57:36.206204 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:36.212781 22721 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:36.214244 22722 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:36.215727 20370 server_base.cc:1034] running on GCE node
W20250114 20:57:36.217005 22724 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:36.217775 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:36.218027 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:36.218181 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888256218164 us; error 0 us; skew 500 ppm
I20250114 20:57:36.218746 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:36.219218 22717 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33343
I20250114 20:57:36.219622 22717 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:36.220409 22717 heartbeater.cc:510] Master 127.19.228.190:33343 requested a full tablet report, sending...
I20250114 20:57:36.221470 20370 webserver.cc:458] Webserver started at http://127.19.228.130:37463/ using document root <none> and password file <none>
I20250114 20:57:36.221997 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:36.222211 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:36.222532 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:36.222492 22590 ts_manager.cc:194] Registered new tserver with Master: 32449db1765042bb8a9ba49111ba9535 (127.19.228.129:40709)
I20250114 20:57:36.224251 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "f200dd3e36004041ba3b16f999cb59bb"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.225015 22590 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:52062
I20250114 20:57:36.228861 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:57:36.231917 22729 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.232618 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:57:36.232859 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "f200dd3e36004041ba3b16f999cb59bb"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.233091 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:36.275578 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:36.276685 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:36.278010 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:36.280059 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:36.280226 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.280447 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:36.280581 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.316483 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:41019
I20250114 20:57:36.316566 22791 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:41019 every 8 connection(s)
I20250114 20:57:36.320818 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:36.327857 22795 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:36.328586 22796 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:36.330754 22798 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:36.331290 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:36.332335 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:36.332553 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:36.332742 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888256332723 us; error 0 us; skew 500 ppm
I20250114 20:57:36.332909 22792 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33343
I20250114 20:57:36.333276 22792 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:36.333312 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:36.334082 22792 heartbeater.cc:510] Master 127.19.228.190:33343 requested a full tablet report, sending...
I20250114 20:57:36.336071 20370 webserver.cc:458] Webserver started at http://127.19.228.131:42761/ using document root <none> and password file <none>
I20250114 20:57:36.336051 22590 ts_manager.cc:194] Registered new tserver with Master: f200dd3e36004041ba3b16f999cb59bb (127.19.228.130:41019)
I20250114 20:57:36.336673 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:36.336864 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:36.337095 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:36.337524 22590 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:52076
I20250114 20:57:36.338229 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "616c7f22c0b64ef49a465656ea4fb910"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.342396 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.005s
I20250114 20:57:36.345273 22803 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.345939 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:57:36.346186 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "616c7f22c0b64ef49a465656ea4fb910"
format_stamp: "Formatted at 2025-01-14 20:57:36 on dist-test-slave-kc3q"
I20250114 20:57:36.346437 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfNoTablets.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:36.371706 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:36.372793 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:36.374101 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:36.376150 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:36.376344 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.376549 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:36.376693 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:36.412390 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:46527
I20250114 20:57:36.412490 22865 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:46527 every 8 connection(s)
I20250114 20:57:36.424566 22866 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33343
I20250114 20:57:36.424904 22866 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:36.425556 22866 heartbeater.cc:510] Master 127.19.228.190:33343 requested a full tablet report, sending...
I20250114 20:57:36.427277 22590 ts_manager.cc:194] Registered new tserver with Master: 616c7f22c0b64ef49a465656ea4fb910 (127.19.228.131:46527)
I20250114 20:57:36.428342 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.013094356s
I20250114 20:57:36.428656 22590 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:52080
I20250114 20:57:37.227669 22717 heartbeater.cc:502] Master 127.19.228.190:33343 was elected leader, sending a full tablet report...
I20250114 20:57:37.339751 22792 heartbeater.cc:502] Master 127.19.228.190:33343 was elected leader, sending a full tablet report...
I20250114 20:57:37.430985 22866 heartbeater.cc:502] Master 127.19.228.190:33343 was elected leader, sending a full tablet report...
I20250114 20:57:38.059973 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:40.060756 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:42.061666 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:57:42.998863 22788 debug-util.cc:398] Leaking SignalData structure 0x7b080012edc0 after lost signal to thread 20373
W20250114 20:57:42.999688 22788 debug-util.cc:398] Leaking SignalData structure 0x7b0800091340 after lost signal to thread 22624
W20250114 20:57:43.000492 22788 debug-util.cc:398] Leaking SignalData structure 0x7b0800048580 after lost signal to thread 22716
W20250114 20:57:43.001354 22788 debug-util.cc:398] Leaking SignalData structure 0x7b08002408e0 after lost signal to thread 22791
W20250114 20:57:43.002074 22788 debug-util.cc:398] Leaking SignalData structure 0x7b080021c9a0 after lost signal to thread 22865
I20250114 20:57:44.062717 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:57:44.462222 22862 debug-util.cc:398] Leaking SignalData structure 0x7b0800030660 after lost signal to thread 20373
W20250114 20:57:44.463016 22862 debug-util.cc:398] Leaking SignalData structure 0x7b08001d33c0 after lost signal to thread 22624
W20250114 20:57:44.463929 22862 debug-util.cc:398] Leaking SignalData structure 0x7b080022da20 after lost signal to thread 22716
W20250114 20:57:44.464721 22862 debug-util.cc:398] Leaking SignalData structure 0x7b080004c580 after lost signal to thread 22791
W20250114 20:57:44.465457 22862 debug-util.cc:398] Leaking SignalData structure 0x7b08000a8f60 after lost signal to thread 22865
I20250114 20:57:46.064038 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:48.065292 22643 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:48.225024 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:57:48.245612 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:48.261917 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:57:48.270608 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:57:48.287703 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:48.318382 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:57:48.326344 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:57:48.345165 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:57:48.376852 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:57:48.384893 20370 master.cc:537] Master@127.19.228.190:33343 shutting down...
I20250114 20:57:48.401136 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:57:48.401899 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:57:48.402235 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 68e322d64664403da622c4bcc5e20834: stopping tablet replica
I20250114 20:57:48.419493 20370 master.cc:559] Master@127.19.228.190:33343 shutdown complete.
[       OK ] AutoRebalancerTest.NoReplicaMovesIfNoTablets (12506 ms)
[ RUN      ] AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne
I20250114 20:57:48.446561 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:33593
I20250114 20:57:48.447715 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:48.452818 22871 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.453205 22872 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.454195 22874 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.454809 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:48.455631 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:48.455834 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:48.455968 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888268455951 us; error 0 us; skew 500 ppm
I20250114 20:57:48.456403 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:48.458664 20370 webserver.cc:458] Webserver started at http://127.19.228.190:42301/ using document root <none> and password file <none>
I20250114 20:57:48.459091 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:48.459259 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:48.459499 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:48.460791 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "249cd2a167b3486da1317c1dddd922a3"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.464926 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:57:48.467913 22879 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.468621 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:57:48.468894 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "249cd2a167b3486da1317c1dddd922a3"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.469156 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:48.495806 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:48.496963 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:48.531471 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:33593
I20250114 20:57:48.531586 22930 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:33593 every 8 connection(s)
I20250114 20:57:48.535123 22931 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:48.545222 22931 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3: Bootstrap starting.
I20250114 20:57:48.549501 22931 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:48.553246 22931 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3: No bootstrap required, opened a new log
I20250114 20:57:48.555135 22931 raft_consensus.cc:357] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER }
I20250114 20:57:48.555609 22931 raft_consensus.cc:383] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:48.555792 22931 raft_consensus.cc:738] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 249cd2a167b3486da1317c1dddd922a3, State: Initialized, Role: FOLLOWER
I20250114 20:57:48.556262 22931 consensus_queue.cc:260] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER }
I20250114 20:57:48.556687 22931 raft_consensus.cc:397] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:57:48.556864 22931 raft_consensus.cc:491] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:57:48.557073 22931 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:48.561179 22931 raft_consensus.cc:513] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER }
I20250114 20:57:48.561615 22931 leader_election.cc:304] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 249cd2a167b3486da1317c1dddd922a3; no voters: 
I20250114 20:57:48.562572 22931 leader_election.cc:290] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:57:48.562927 22934 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:57:48.564133 22934 raft_consensus.cc:695] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 1 LEADER]: Becoming Leader. State: Replica: 249cd2a167b3486da1317c1dddd922a3, State: Running, Role: LEADER
I20250114 20:57:48.564689 22934 consensus_queue.cc:237] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER }
I20250114 20:57:48.565326 22931 sys_catalog.cc:564] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:57:48.568871 22935 sys_catalog.cc:455] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "249cd2a167b3486da1317c1dddd922a3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER } }
I20250114 20:57:48.569023 22936 sys_catalog.cc:455] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 249cd2a167b3486da1317c1dddd922a3. Latest consensus state: current_term: 1 leader_uuid: "249cd2a167b3486da1317c1dddd922a3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "249cd2a167b3486da1317c1dddd922a3" member_type: VOTER } }
I20250114 20:57:48.569505 22935 sys_catalog.cc:458] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:48.569646 22936 sys_catalog.cc:458] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [sys.catalog]: This master's current role is: LEADER
I20250114 20:57:48.572299 22942 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:57:48.577994 22942 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:57:48.578881 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:57:48.585727 22942 catalog_manager.cc:1348] Generated new cluster ID: e62da3fea16745db9f29ee68b618f27a
I20250114 20:57:48.585987 22942 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:57:48.611109 22942 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:57:48.612306 22942 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:57:48.624992 22942 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3: Generated new TSK 0
I20250114 20:57:48.625550 22942 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:57:48.644920 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:48.650278 22952 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.651365 22953 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.652503 20370 server_base.cc:1034] running on GCE node
W20250114 20:57:48.653033 22955 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.653769 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:48.653960 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:48.654100 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888268654084 us; error 0 us; skew 500 ppm
I20250114 20:57:48.654551 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:48.656646 20370 webserver.cc:458] Webserver started at http://127.19.228.129:44181/ using document root <none> and password file <none>
I20250114 20:57:48.657048 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:48.657213 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:48.657438 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:48.658524 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "eb7b942afd3d4a4c89abadd648d8cba0"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.662505 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.003s
I20250114 20:57:48.665351 22960 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.665994 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.002s
I20250114 20:57:48.666258 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "eb7b942afd3d4a4c89abadd648d8cba0"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.666563 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:48.676930 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:48.677932 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:48.679198 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:48.681753 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:48.681919 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.682126 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:48.682261 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.718331 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:44879
I20250114 20:57:48.718411 23022 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:44879 every 8 connection(s)
I20250114 20:57:48.722478 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:48.729566 23027 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.731340 23028 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.733212 23030 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.733914 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:48.734187 23023 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33593
I20250114 20:57:48.734575 23023 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:48.734872 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:48.735162 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:48.735375 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888268735355 us; error 0 us; skew 500 ppm
I20250114 20:57:48.735427 23023 heartbeater.cc:510] Master 127.19.228.190:33593 requested a full tablet report, sending...
I20250114 20:57:48.736093 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:48.737563 22896 ts_manager.cc:194] Registered new tserver with Master: eb7b942afd3d4a4c89abadd648d8cba0 (127.19.228.129:44879)
I20250114 20:57:48.738656 20370 webserver.cc:458] Webserver started at http://127.19.228.130:43053/ using document root <none> and password file <none>
I20250114 20:57:48.739080 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:48.739240 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:48.739385 22896 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50498
I20250114 20:57:48.739470 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:48.740991 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "67079501e0e9432c8f5a6ad44e7860b1"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.745187 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:57:48.748109 23035 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.748764 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:57:48.749003 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "67079501e0e9432c8f5a6ad44e7860b1"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.749248 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:48.791105 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:48.792179 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:48.793486 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:48.795526 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:48.795723 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.795929 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:48.796072 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.831123 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:37037
I20250114 20:57:48.831210 23097 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:37037 every 8 connection(s)
I20250114 20:57:48.835494 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:48.842008 23101 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.844897 23102 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.845206 23104 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.845629 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:48.846084 23098 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33593
I20250114 20:57:48.846490 23098 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:48.846706 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:57:48.846958 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:48.847151 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888268847133 us; error 0 us; skew 500 ppm
I20250114 20:57:48.847297 23098 heartbeater.cc:510] Master 127.19.228.190:33593 requested a full tablet report, sending...
I20250114 20:57:48.847853 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:48.849298 22896 ts_manager.cc:194] Registered new tserver with Master: 67079501e0e9432c8f5a6ad44e7860b1 (127.19.228.130:37037)
I20250114 20:57:48.850356 20370 webserver.cc:458] Webserver started at http://127.19.228.131:33359/ using document root <none> and password file <none>
I20250114 20:57:48.850739 22896 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50514
I20250114 20:57:48.850845 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:48.851106 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:48.851351 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:48.852838 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "bd3f3bb7934341988ccb0675e2c66ed9"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.856817 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.000s	sys 0.005s
I20250114 20:57:48.859607 23109 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.860268 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.001s
I20250114 20:57:48.860493 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "bd3f3bb7934341988ccb0675e2c66ed9"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.860719 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:48.872946 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:48.873958 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:48.875183 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:48.877239 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:48.877413 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.877629 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:48.877761 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.913802 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:42133
I20250114 20:57:48.913880 23171 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:42133 every 8 connection(s)
I20250114 20:57:48.918424 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:57:48.925451 23175 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:57:48.926429 23176 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.929708 20370 server_base.cc:1034] running on GCE node
I20250114 20:57:48.930327 23172 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33593
W20250114 20:57:48.930586 23178 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:57:48.930716 23172 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:48.931453 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:57:48.931499 23172 heartbeater.cc:510] Master 127.19.228.190:33593 requested a full tablet report, sending...
W20250114 20:57:48.931767 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:57:48.931983 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888268931967 us; error 0 us; skew 500 ppm
I20250114 20:57:48.932591 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:57:48.933368 22896 ts_manager.cc:194] Registered new tserver with Master: bd3f3bb7934341988ccb0675e2c66ed9 (127.19.228.131:42133)
I20250114 20:57:48.934792 22896 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50518
I20250114 20:57:48.935019 20370 webserver.cc:458] Webserver started at http://127.19.228.132:45237/ using document root <none> and password file <none>
I20250114 20:57:48.935472 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:57:48.935711 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:57:48.935993 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:57:48.937058 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "3ad1ab42c00948cda9d60bbab5e5542c"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.941095 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:57:48.943905 23183 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.944581 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:57:48.944831 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "3ad1ab42c00948cda9d60bbab5e5542c"
format_stamp: "Formatted at 2025-01-14 20:57:48 on dist-test-slave-kc3q"
I20250114 20:57:48.945068 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:57:48.972303 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:57:48.973402 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:57:48.974568 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:57:48.976630 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:57:48.976819 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:48.977020 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:57:48.977173 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:57:49.013679 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:41443
I20250114 20:57:49.013731 23245 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:41443 every 8 connection(s)
I20250114 20:57:49.025856 23246 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33593
I20250114 20:57:49.026171 23246 heartbeater.cc:463] Registering TS with master...
I20250114 20:57:49.026791 23246 heartbeater.cc:510] Master 127.19.228.190:33593 requested a full tablet report, sending...
I20250114 20:57:49.028442 22896 ts_manager.cc:194] Registered new tserver with Master: 3ad1ab42c00948cda9d60bbab5e5542c (127.19.228.132:41443)
I20250114 20:57:49.028842 20370 internal_mini_cluster.cc:371] 4 TS(s) registered with all masters after 0.012124986s
I20250114 20:57:49.029659 22896 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:50534
I20250114 20:57:49.742107 23023 heartbeater.cc:502] Master 127.19.228.190:33593 was elected leader, sending a full tablet report...
I20250114 20:57:49.853181 23098 heartbeater.cc:502] Master 127.19.228.190:33593 was elected leader, sending a full tablet report...
I20250114 20:57:49.937001 23172 heartbeater.cc:502] Master 127.19.228.190:33593 was elected leader, sending a full tablet report...
I20250114 20:57:50.031877 23246 heartbeater.cc:502] Master 127.19.228.190:33593 was elected leader, sending a full tablet report...
I20250114 20:57:50.071878 20370 test_util.cc:274] Using random seed: -799397839
I20250114 20:57:50.090991 22896 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:50546:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250114 20:57:50.130777 23211 tablet_service.cc:1467] Processing CreateTablet for tablet 1049736b9c034067a841f29a41dee5a9 (DEFAULT_TABLE table=test-workload [id=111294f7e75d4d269be5025263560326]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:57:50.132193 23211 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1049736b9c034067a841f29a41dee5a9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:50.144625 23137 tablet_service.cc:1467] Processing CreateTablet for tablet 1049736b9c034067a841f29a41dee5a9 (DEFAULT_TABLE table=test-workload [id=111294f7e75d4d269be5025263560326]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:57:50.145124 23063 tablet_service.cc:1467] Processing CreateTablet for tablet 1049736b9c034067a841f29a41dee5a9 (DEFAULT_TABLE table=test-workload [id=111294f7e75d4d269be5025263560326]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:57:50.146027 23137 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1049736b9c034067a841f29a41dee5a9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:50.146234 23063 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1049736b9c034067a841f29a41dee5a9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:57:50.150032 23267 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: Bootstrap starting.
I20250114 20:57:50.155663 23267 tablet_bootstrap.cc:654] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:50.162256 23269 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: Bootstrap starting.
I20250114 20:57:50.163568 23270 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: Bootstrap starting.
I20250114 20:57:50.164376 23267 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: No bootstrap required, opened a new log
I20250114 20:57:50.164754 23267 ts_tablet_manager.cc:1397] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: Time spent bootstrapping tablet: real 0.015s	user 0.003s	sys 0.008s
I20250114 20:57:50.166836 23267 raft_consensus.cc:357] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.167745 23267 raft_consensus.cc:383] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:50.168030 23267 raft_consensus.cc:738] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3ad1ab42c00948cda9d60bbab5e5542c, State: Initialized, Role: FOLLOWER
I20250114 20:57:50.168751 23267 consensus_queue.cc:260] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.169744 23270 tablet_bootstrap.cc:654] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:50.169749 23269 tablet_bootstrap.cc:654] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: Neither blocks nor log segments found. Creating new log.
I20250114 20:57:50.171306 23267 ts_tablet_manager.cc:1428] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: Time spent starting tablet: real 0.006s	user 0.002s	sys 0.004s
I20250114 20:57:50.175146 23269 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: No bootstrap required, opened a new log
I20250114 20:57:50.175531 23269 ts_tablet_manager.cc:1397] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: Time spent bootstrapping tablet: real 0.013s	user 0.010s	sys 0.000s
I20250114 20:57:50.176404 23270 tablet_bootstrap.cc:492] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: No bootstrap required, opened a new log
I20250114 20:57:50.176836 23270 ts_tablet_manager.cc:1397] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: Time spent bootstrapping tablet: real 0.014s	user 0.001s	sys 0.010s
I20250114 20:57:50.177826 23269 raft_consensus.cc:357] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.178646 23269 raft_consensus.cc:383] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:50.178933 23269 raft_consensus.cc:738] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: bd3f3bb7934341988ccb0675e2c66ed9, State: Initialized, Role: FOLLOWER
I20250114 20:57:50.179077 23270 raft_consensus.cc:357] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.179764 23270 raft_consensus.cc:383] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:57:50.180106 23270 raft_consensus.cc:738] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 67079501e0e9432c8f5a6ad44e7860b1, State: Initialized, Role: FOLLOWER
I20250114 20:57:50.179661 23269 consensus_queue.cc:260] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.180768 23270 consensus_queue.cc:260] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.182708 23269 ts_tablet_manager.cc:1428] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20250114 20:57:50.185725 23270 ts_tablet_manager.cc:1428] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: Time spent starting tablet: real 0.009s	user 0.007s	sys 0.002s
I20250114 20:57:50.253190 23275 raft_consensus.cc:491] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:57:50.253618 23275 raft_consensus.cc:513] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.255492 23275 leader_election.cc:290] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 3ad1ab42c00948cda9d60bbab5e5542c (127.19.228.132:41443), bd3f3bb7934341988ccb0675e2c66ed9 (127.19.228.131:42133)
I20250114 20:57:50.265651 23221 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1049736b9c034067a841f29a41dee5a9" candidate_uuid: "67079501e0e9432c8f5a6ad44e7860b1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" is_pre_election: true
I20250114 20:57:50.266453 23221 raft_consensus.cc:2463] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 67079501e0e9432c8f5a6ad44e7860b1 in term 0.
I20250114 20:57:50.266446 23147 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1049736b9c034067a841f29a41dee5a9" candidate_uuid: "67079501e0e9432c8f5a6ad44e7860b1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" is_pre_election: true
I20250114 20:57:50.267390 23147 raft_consensus.cc:2463] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 67079501e0e9432c8f5a6ad44e7860b1 in term 0.
I20250114 20:57:50.267870 23036 leader_election.cc:304] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3ad1ab42c00948cda9d60bbab5e5542c, 67079501e0e9432c8f5a6ad44e7860b1; no voters: 
W20250114 20:57:50.268345 23247 tablet.cc:2367] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250114 20:57:50.268711 23275 raft_consensus.cc:2798] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:57:50.269008 23275 raft_consensus.cc:491] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:57:50.269241 23275 raft_consensus.cc:3054] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:50.273896 23275 raft_consensus.cc:513] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.275300 23275 leader_election.cc:290] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [CANDIDATE]: Term 1 election: Requested vote from peers 3ad1ab42c00948cda9d60bbab5e5542c (127.19.228.132:41443), bd3f3bb7934341988ccb0675e2c66ed9 (127.19.228.131:42133)
I20250114 20:57:50.275936 23221 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1049736b9c034067a841f29a41dee5a9" candidate_uuid: "67079501e0e9432c8f5a6ad44e7860b1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3ad1ab42c00948cda9d60bbab5e5542c"
I20250114 20:57:50.276247 23147 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1049736b9c034067a841f29a41dee5a9" candidate_uuid: "67079501e0e9432c8f5a6ad44e7860b1" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "bd3f3bb7934341988ccb0675e2c66ed9"
I20250114 20:57:50.276513 23221 raft_consensus.cc:3054] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:50.276789 23147 raft_consensus.cc:3054] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:57:50.283030 23221 raft_consensus.cc:2463] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 67079501e0e9432c8f5a6ad44e7860b1 in term 1.
I20250114 20:57:50.283087 23147 raft_consensus.cc:2463] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 67079501e0e9432c8f5a6ad44e7860b1 in term 1.
I20250114 20:57:50.284173 23036 leader_election.cc:304] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3ad1ab42c00948cda9d60bbab5e5542c, 67079501e0e9432c8f5a6ad44e7860b1; no voters: 
I20250114 20:57:50.284761 23275 raft_consensus.cc:2798] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:57:50.285669 23275 raft_consensus.cc:695] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 1 LEADER]: Becoming Leader. State: Replica: 67079501e0e9432c8f5a6ad44e7860b1, State: Running, Role: LEADER
I20250114 20:57:50.286348 23275 consensus_queue.cc:237] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } }
I20250114 20:57:50.293072 22895 catalog_manager.cc:5526] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 reported cstate change: term changed from 0 to 1, leader changed from <none> to 67079501e0e9432c8f5a6ad44e7860b1 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "67079501e0e9432c8f5a6ad44e7860b1" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "67079501e0e9432c8f5a6ad44e7860b1" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 37037 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 } health_report { overall_health: UNKNOWN } } }
I20250114 20:57:50.578179 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:50.579479 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:50.579823 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:50.873934 23280 consensus_queue.cc:1035] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "3ad1ab42c00948cda9d60bbab5e5542c" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41443 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:57:50.885072 23283 consensus_queue.cc:1035] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [LEADER]: Connected to new peer: Peer: permanent_uuid: "bd3f3bb7934341988ccb0675e2c66ed9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42133 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:57:52.580669 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:52.581849 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:52.582095 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:54.583282 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:54.585026 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:54.585278 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:56.586094 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:56.587256 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:56.587498 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:57:58.588377 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:57:58.589552 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:57:58.589793 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:00.590682 22949 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:00.591895 22949 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:00.592146 22949 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:01.397504 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:01.418573 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:01.435014 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:01.444141 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:58:01.462098 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:01.462723 20370 tablet_replica.cc:331] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1: stopping tablet replica
I20250114 20:58:01.463217 20370 raft_consensus.cc:2238] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:01.464161 20370 raft_consensus.cc:2267] T 1049736b9c034067a841f29a41dee5a9 P 67079501e0e9432c8f5a6ad44e7860b1 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:01.482731 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:01.492395 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:01.510308 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:01.510927 20370 tablet_replica.cc:331] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9: stopping tablet replica
I20250114 20:58:01.511678 20370 raft_consensus.cc:2238] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:01.512125 20370 raft_consensus.cc:2267] T 1049736b9c034067a841f29a41dee5a9 P bd3f3bb7934341988ccb0675e2c66ed9 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:01.530041 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:01.539793 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:58:01.558446 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:01.559029 20370 tablet_replica.cc:331] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c: stopping tablet replica
I20250114 20:58:01.559511 20370 raft_consensus.cc:2238] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:01.559983 20370 raft_consensus.cc:2267] T 1049736b9c034067a841f29a41dee5a9 P 3ad1ab42c00948cda9d60bbab5e5542c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:01.577744 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:58:01.587361 20370 master.cc:537] Master@127.19.228.190:33593 shutting down...
I20250114 20:58:01.603085 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:01.603720 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:01.604035 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 249cd2a167b3486da1317c1dddd922a3: stopping tablet replica
I20250114 20:58:01.621784 20370 master.cc:559] Master@127.19.228.190:33593 shutdown complete.
[       OK ] AutoRebalancerTest.NoReplicaMovesIfLocationLoadSkewedByOne (13208 ms)
[ RUN      ] AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy
I20250114 20:58:01.654767 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:46741
I20250114 20:58:01.655843 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:01.661141 23293 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:01.661665 23294 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:01.661979 23296 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:01.663050 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:01.663712 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:01.663882 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:01.663990 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888281663980 us; error 0 us; skew 500 ppm
I20250114 20:58:01.664390 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:01.672995 20370 webserver.cc:458] Webserver started at http://127.19.228.190:39443/ using document root <none> and password file <none>
I20250114 20:58:01.673444 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:01.673589 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:01.673799 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:01.674925 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "5a1cb7e676e04c1fb80029ff78da9c39"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.679098 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.002s
I20250114 20:58:01.682157 23301 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.682843 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:01.683064 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "5a1cb7e676e04c1fb80029ff78da9c39"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.683300 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:01.697902 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:01.699009 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:01.733561 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:46741
I20250114 20:58:01.733654 23352 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:46741 every 8 connection(s)
I20250114 20:58:01.737228 23353 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:01.747282 23353 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39: Bootstrap starting.
I20250114 20:58:01.751925 23353 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:01.755857 23353 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39: No bootstrap required, opened a new log
I20250114 20:58:01.757822 23353 raft_consensus.cc:357] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER }
I20250114 20:58:01.758283 23353 raft_consensus.cc:383] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:01.758491 23353 raft_consensus.cc:738] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5a1cb7e676e04c1fb80029ff78da9c39, State: Initialized, Role: FOLLOWER
I20250114 20:58:01.759003 23353 consensus_queue.cc:260] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER }
I20250114 20:58:01.759436 23353 raft_consensus.cc:397] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:58:01.759672 23353 raft_consensus.cc:491] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:58:01.759927 23353 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:01.764072 23353 raft_consensus.cc:513] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER }
I20250114 20:58:01.764560 23353 leader_election.cc:304] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5a1cb7e676e04c1fb80029ff78da9c39; no voters: 
I20250114 20:58:01.765589 23353 leader_election.cc:290] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:58:01.765939 23356 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:01.767148 23356 raft_consensus.cc:695] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 1 LEADER]: Becoming Leader. State: Replica: 5a1cb7e676e04c1fb80029ff78da9c39, State: Running, Role: LEADER
I20250114 20:58:01.767742 23356 consensus_queue.cc:237] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER }
I20250114 20:58:01.768347 23353 sys_catalog.cc:564] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:58:01.770565 23358 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 5a1cb7e676e04c1fb80029ff78da9c39. Latest consensus state: current_term: 1 leader_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER } }
I20250114 20:58:01.770470 23357 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5a1cb7e676e04c1fb80029ff78da9c39" member_type: VOTER } }
I20250114 20:58:01.771281 23357 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:01.771791 23358 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:01.774138 23362 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:58:01.779116 23362 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:58:01.782011 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:58:01.787361 23362 catalog_manager.cc:1348] Generated new cluster ID: 43d74ac22ec74babbe5d08aefd1a32e7
I20250114 20:58:01.787650 23362 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:58:01.797986 23362 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:58:01.799226 23362 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:58:01.814518 23362 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39: Generated new TSK 0
I20250114 20:58:01.815104 23362 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:58:01.848476 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:01.853822 23374 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:01.854986 23375 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:01.855932 23377 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:01.857048 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:01.857810 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:01.857986 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:01.858107 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888281858097 us; error 0 us; skew 500 ppm
I20250114 20:58:01.858551 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:01.860750 20370 webserver.cc:458] Webserver started at http://127.19.228.129:34561/ using document root <none> and password file <none>
I20250114 20:58:01.861160 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:01.861303 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:01.861518 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:01.862571 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "ada5d4385655434db4a1aac44c4efc90"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.866729 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.000s	sys 0.006s
I20250114 20:58:01.869952 23382 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.870632 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:01.870890 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "ada5d4385655434db4a1aac44c4efc90"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.871205 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:01.891911 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:01.893000 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:01.894338 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:01.896504 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:01.896687 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.896901 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:01.897045 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.933431 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:42393
I20250114 20:58:01.933545 23444 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:42393 every 8 connection(s)
I20250114 20:58:01.937709 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:01.944846 23449 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:01.946864 23450 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:01.948470 20370 server_base.cc:1034] running on GCE node
W20250114 20:58:01.948984 23453 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:01.949824 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:01.950038 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:01.950232 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888281950214 us; error 0 us; skew 500 ppm
I20250114 20:58:01.950318 23445 heartbeater.cc:346] Connected to a master server at 127.19.228.190:46741
I20250114 20:58:01.950691 23445 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:01.950778 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:01.951421 23445 heartbeater.cc:510] Master 127.19.228.190:46741 requested a full tablet report, sending...
I20250114 20:58:01.953552 20370 webserver.cc:458] Webserver started at http://127.19.228.130:35397/ using document root <none> and password file <none>
I20250114 20:58:01.953580 23318 ts_manager.cc:194] Registered new tserver with Master: ada5d4385655434db4a1aac44c4efc90 (127.19.228.129:42393)
I20250114 20:58:01.954145 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:01.954353 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:01.954588 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:01.955461 23318 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:58292
I20250114 20:58:01.955857 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "b8513020d7904d268a210ae3fba4c4aa"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.960340 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.000s	sys 0.004s
I20250114 20:58:01.963402 23457 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.964125 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:01.964383 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "b8513020d7904d268a210ae3fba4c4aa"
format_stamp: "Formatted at 2025-01-14 20:58:01 on dist-test-slave-kc3q"
I20250114 20:58:01.964649 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:01.975759 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:01.976835 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:01.978204 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:01.980262 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:01.980443 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:01.980646 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:01.980787 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:02.016631 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:39451
I20250114 20:58:02.016726 23519 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:39451 every 8 connection(s)
I20250114 20:58:02.021036 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:02.028024 23523 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:02.028864 23524 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:02.030972 20370 server_base.cc:1034] running on GCE node
W20250114 20:58:02.031894 23526 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:02.032321 23520 heartbeater.cc:346] Connected to a master server at 127.19.228.190:46741
I20250114 20:58:02.032644 23520 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:02.032732 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:02.032970 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:02.033114 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888282033097 us; error 0 us; skew 500 ppm
I20250114 20:58:02.033342 23520 heartbeater.cc:510] Master 127.19.228.190:46741 requested a full tablet report, sending...
I20250114 20:58:02.033656 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:02.035295 23318 ts_manager.cc:194] Registered new tserver with Master: b8513020d7904d268a210ae3fba4c4aa (127.19.228.130:39451)
I20250114 20:58:02.036350 20370 webserver.cc:458] Webserver started at http://127.19.228.131:43461/ using document root <none> and password file <none>
I20250114 20:58:02.036979 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:02.037087 23318 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:58306
I20250114 20:58:02.037230 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:02.037600 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:02.038774 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "a873a31ad7e44a40972b91f340d5f612"
format_stamp: "Formatted at 2025-01-14 20:58:02 on dist-test-slave-kc3q"
I20250114 20:58:02.042887 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.004s
I20250114 20:58:02.045782 23531 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:02.046471 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:02.046728 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "a873a31ad7e44a40972b91f340d5f612"
format_stamp: "Formatted at 2025-01-14 20:58:02 on dist-test-slave-kc3q"
I20250114 20:58:02.046979 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:02.072084 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:02.073243 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:02.074613 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:02.076767 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:02.076951 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:02.077171 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:02.077343 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:02.112730 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:34173
I20250114 20:58:02.112826 23593 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:34173 every 8 connection(s)
I20250114 20:58:02.124861 23594 heartbeater.cc:346] Connected to a master server at 127.19.228.190:46741
I20250114 20:58:02.125212 23594 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:02.125912 23594 heartbeater.cc:510] Master 127.19.228.190:46741 requested a full tablet report, sending...
I20250114 20:58:02.127650 23318 ts_manager.cc:194] Registered new tserver with Master: a873a31ad7e44a40972b91f340d5f612 (127.19.228.131:34173)
I20250114 20:58:02.127822 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012164788s
I20250114 20:58:02.129412 23318 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:58320
I20250114 20:58:02.957875 23445 heartbeater.cc:502] Master 127.19.228.190:46741 was elected leader, sending a full tablet report...
I20250114 20:58:03.039587 23520 heartbeater.cc:502] Master 127.19.228.190:46741 was elected leader, sending a full tablet report...
I20250114 20:58:03.188576 23594 heartbeater.cc:502] Master 127.19.228.190:46741 was elected leader, sending a full tablet report...
I20250114 20:58:04.151015 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
==================
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b480012a940 by thread T303 (mutexes: read M1056229661787465760):
    #0 std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 (libksck.so+0x11173a)
    #1 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:331:19 (libksck.so+0x111b2d)
    #2 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #3 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #4 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #5 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #6 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #7 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #8 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #9 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #10 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #11 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #12 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #13 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #14 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #15 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #16 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #17 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 1 at 0x7b480012a940 by main thread:
    #0 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:324:26 (auto_rebalancer-test+0x39259f)
    #1 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #2 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #3 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #4 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:493:3 (auto_rebalancer-test+0x369f46)
    #5 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #6 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #7 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #8 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #9 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #10 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #11 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #12 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #13 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #14 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #15 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

SUMMARY: ThreadSanitizer: data race /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 in std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const
==================
==================
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b480012a928 by thread T303 (mutexes: read M1056229661787465760):
    #0 __is_long /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 (libc++.so.1+0xc64d4)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1881:16 (libc++.so.1+0xc64d4)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b480012a928 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:493:3 (auto_rebalancer-test+0x369f46)
    #7 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #8 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #9 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #10 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #11 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #12 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #13 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #14 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #15 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #16 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #17 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

SUMMARY: ThreadSanitizer: data race /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 in __is_long
==================
==================
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 8 at 0x7b480012a930 by thread T303 (mutexes: read M1056229661787465760):
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string (libc++.so.1+0xc6572)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b480012a930 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #7 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #8 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #9 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #10 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #11 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #12 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #13 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #14 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b480012a800 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M1056229661787465760 is already destroyed.

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

SUMMARY: ThreadSanitizer: data race /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string in std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)
==================
==================
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b48000749c0 by thread T303 (mutexes: read M82126):
    #0 std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 (libksck.so+0x11173a)
    #1 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:331:19 (libksck.so+0x111b2d)
    #2 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #3 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #4 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #5 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #6 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #7 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #8 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #9 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #10 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #11 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #12 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #13 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #14 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #15 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #16 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #17 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 1 at 0x7b48000749c0 by main thread:
    #0 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:324:26 (auto_rebalancer-test+0x39259f)
    #1 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #2 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #3 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #4 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #5 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #6 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #7 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #8 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #9 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #10 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #11 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #12 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b4800074880 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M82126 (0x7b48000748a0) created at:
    #0 AnnotateRWLockCreate /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp:254 (auto_rebalancer-test+0x33558e)
    #1 kudu::rw_spinlock::rw_spinlock() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/locks.h:86:5 (libmaster.so+0x355fce)
    #2 kudu::master::TSDescriptor::TSDescriptor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:79:15 (libmaster.so+0x494651)
    #3 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler::make_shared_enabler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:57:53 (libmaster.so+0x49b659)
    #4 std::__1::__compressed_pair_elem<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, 1, false>::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, 0ul>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::__tuple_indices<0ul>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2113:9 (libmaster.so+0x49b599)
    #5 std::__1::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2197:9 (libmaster.so+0x49b284)
    #6 std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> >::__shared_ptr_emplace<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:3470:16 (libmaster.so+0x49ae9e)
    #7 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4291:26 (libmaster.so+0x49ab70)
    #8 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #9 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #10 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #11 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #12 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #13 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #15 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #16 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #17 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #18 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #19 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #20 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #21 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #22 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #23 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #24 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #25 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #26 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #27 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #28 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

SUMMARY: ThreadSanitizer: data race /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:294:22 in std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::has_value() const
==================
==================
WARNING: ThreadSanitizer: data race (pid=20370)
  Read of size 1 at 0x7b48000749a8 by thread T303 (mutexes: read M82126):
    #0 __is_long /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 (libc++.so.1+0xc64d4)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1881:16 (libc++.so.1+0xc64d4)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (libksck.so+0x111ba6)
    #3 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct_from<std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&>(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:332:13 (libksck.so+0x111b4c)
    #4 std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_base(std::__1::__optional_copy_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:464:15 (libmaster.so+0x2df798)
    #5 std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_base(std::__1::__optional_move_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:490:5 (libmaster.so+0x2df750)
    #6 std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_copy_assign_base(std::__1::__optional_copy_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:522:5 (libmaster.so+0x2df710)
    #7 std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__optional_move_assign_base(std::__1::__optional_move_assign_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false> const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:555:5 (libmaster.so+0x2df6d0)
    #8 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::optional(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:688:41 (libmaster.so+0x2df3c0)
    #9 kudu::master::TSDescriptor::location() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:250:12 (libmaster.so+0x2d2948)
    #10 kudu::master::AutoRebalancerTask::BuildClusterRawInfo(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::rebalance::ClusterRawInfo*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:494:13 (libmaster.so+0x2cb7c0)
    #11 kudu::master::AutoRebalancerTask::RunLoop() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:225:16 (libmaster.so+0x2cadf2)
    #12 kudu::master::AutoRebalancerTask::Init()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:185:42 (libmaster.so+0x2cfe31)
    #13 decltype(std::__1::forward<kudu::master::AutoRebalancerTask::Init()::$_0&>(fp)()) std::__1::__invoke<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x2cfde9)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::AutoRebalancerTask::Init()::$_0&>(kudu::master::AutoRebalancerTask::Init()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x2cfd79)
    #15 std::__1::__function::__alloc_func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x2cfd41)
    #16 std::__1::__function::__func<kudu::master::AutoRebalancerTask::Init()::$_0, std::__1::allocator<kudu::master::AutoRebalancerTask::Init()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x2cf03d)
    #17 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #18 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #19 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Previous write of size 8 at 0x7b48000749a8 by main thread:
    #0 memcpy sanitizer_common/sanitizer_common_interceptors.inc:808 (auto_rebalancer-test+0x2ee6dc)
    #1 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/string:1936:7 (auto_rebalancer-test+0x39280d)
    #2 void std::__1::__optional_storage_base<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false>::__construct<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:323:54 (auto_rebalancer-test+0x392596)
    #3 std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >& std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::operator=<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/optional:790:19 (auto_rebalancer-test+0x39247e)
    #4 kudu::master::TSDescriptor::AssignLocationForTesting(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.h:278:15 (auto_rebalancer-test+0x39034f)
    #5 kudu::master::AutoRebalancerTest::AssignLocationsWithSkew(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:165:17 (auto_rebalancer-test+0x37ebda)
    #6 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #7 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #8 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #9 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #10 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #11 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #12 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #13 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #14 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

  Location is heap block of size 376 at 0x7b4800074880 allocated by thread T342:
    #0 operator new(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_new_delete.cpp:64 (auto_rebalancer-test+0x366217)
    #1 std::__1::__libcpp_allocate(unsigned long, unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/new:253:10 (libmaster.so+0x2c0b76)
    #2 std::__1::allocator<std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> > >::allocate(unsigned long) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:1789:34 (libmaster.so+0x49ace1)
    #3 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4290:45 (libmaster.so+0x49ab19)
    #4 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #5 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #6 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #7 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #8 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #9 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #10 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #11 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #12 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #13 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #14 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #15 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #16 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #17 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #18 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #19 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #20 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #21 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #22 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #23 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #24 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Mutex M82126 (0x7b48000748a0) created at:
    #0 AnnotateRWLockCreate /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp:254 (auto_rebalancer-test+0x33558e)
    #1 kudu::rw_spinlock::rw_spinlock() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/locks.h:86:5 (libmaster.so+0x355fce)
    #2 kudu::master::TSDescriptor::TSDescriptor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:79:15 (libmaster.so+0x494651)
    #3 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler::make_shared_enabler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:57:53 (libmaster.so+0x49b659)
    #4 std::__1::__compressed_pair_elem<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, 1, false>::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, 0ul>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::__tuple_indices<0ul>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2113:9 (libmaster.so+0x49b599)
    #5 std::__1::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>::__compressed_pair<std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:2197:9 (libmaster.so+0x49b284)
    #6 std::__1::__shared_ptr_emplace<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler> >::__shared_ptr_emplace<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::allocator<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:3470:16 (libmaster.so+0x49ae9e)
    #7 std::__1::enable_if<!(is_array<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::value), std::__1::shared_ptr<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&> >::type std::__1::make_shared<std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::make_shared_enabler&&...) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/memory:4291:26 (libmaster.so+0x49ab70)
    #8 std::__1::shared_ptr<kudu::master::TSDescriptor> enable_make_shared<kudu::master::TSDescriptor>::make_shared<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/make_shared.h:61:12 (libmaster.so+0x49691d)
    #9 kudu::master::TSDescriptor::RegisterNew(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_descriptor.cc:72:32 (libmaster.so+0x493f70)
    #10 kudu::master::TSManager::RegisterTS(kudu::NodeInstancePB const&, kudu::ServerRegistrationPB const&, kudu::DnsResolver*, std::__1::shared_ptr<kudu::master::TSDescriptor>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/ts_manager.cc:188:7 (libmaster.so+0x4a8f8b)
    #11 kudu::master::MasterServiceImpl::TSHeartbeat(kudu::master::TSHeartbeatRequestPB const*, kudu::master::TSHeartbeatResponsePB*, kudu::rpc::RpcContext*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master_service.cc:418:39 (libmaster.so+0x431f24)
    #12 kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/build/tsan/src/kudu/master/master.service.cc:585:13 (libmaster_proto.so+0x261fc4)
    #13 decltype(std::__1::forward<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&>(fp)(std::__1::forward<google::protobuf::Message const*>(fp0), std::__1::forward<google::protobuf::Message*>(fp0), std::__1::forward<kudu::rpc::RpcContext*>(fp0))) std::__1::__invoke<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster_proto.so+0x261f52)
    #14 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*>(kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1&, google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster_proto.so+0x261e81)
    #15 std::__1::__function::__alloc_func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster_proto.so+0x261dfc)
    #16 std::__1::__function::__func<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1, std::__1::allocator<kudu::master::MasterServiceIf::MasterServiceIf(scoped_refptr<kudu::MetricEntity> const&, scoped_refptr<kudu::rpc::ResultTracker> const&)::$_1>, void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster_proto.so+0x2610b2)
    #17 std::__1::__function::__value_func<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*&&, google::protobuf::Message*&&, kudu::rpc::RpcContext*&&) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libkrpc.so+0x1f2dfc)
    #18 std::__1::function<void (google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*)>::operator()(google::protobuf::Message const*, google::protobuf::Message*, kudu::rpc::RpcContext*) const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libkrpc.so+0x1f2236)
    #19 kudu::rpc::GeneratedServiceIf::Handle(kudu::rpc::InboundCall*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_if.cc:137:3 (libkrpc.so+0x1f1bdf)
    #20 kudu::rpc::ServicePool::RunThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:229:15 (libkrpc.so+0x1f4ef3)
    #21 kudu::rpc::ServicePool::Init(int)::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f6211)
    #22 decltype(std::__1::forward<kudu::rpc::ServicePool::Init(int)::$_0&>(fp)()) std::__1::__invoke<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkrpc.so+0x1f61c9)
    #23 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::rpc::ServicePool::Init(int)::$_0&>(kudu::rpc::ServicePool::Init(int)::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkrpc.so+0x1f6159)
    #24 std::__1::__function::__alloc_func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkrpc.so+0x1f6121)
    #25 std::__1::__function::__func<kudu::rpc::ServicePool::Init(int)::$_0, std::__1::allocator<kudu::rpc::ServicePool::Init(int)::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkrpc.so+0x1f541d)
    #26 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #27 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #28 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T303 'auto-rebalancer' (tid=23366, running) created by thread T136 at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::master::AutoRebalancerTask::Init() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer.cc:184:10 (libmaster.so+0x2cab02)
    #4 kudu::master::CatalogManager::Init(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:1019:3 (libmaster.so+0x30cb4c)
    #5 kudu::master::Master::InitCatalogManager() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:402:3 (libmaster.so+0x3f4d45)
    #6 kudu::master::Master::InitCatalogManagerTask() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:390:14 (libmaster.so+0x3f4ba3)
    #7 kudu::master::Master::StartAsync()::$_0::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:370:3 (libmaster.so+0x3f9411)
    #8 decltype(std::__1::forward<kudu::master::Master::StartAsync()::$_0&>(fp)()) std::__1::__invoke<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libmaster.so+0x3f93c9)
    #9 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::master::Master::StartAsync()::$_0&>(kudu::master::Master::StartAsync()::$_0&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libmaster.so+0x3f9359)
    #10 std::__1::__function::__alloc_func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libmaster.so+0x3f9321)
    #11 std::__1::__function::__func<kudu::master::Master::StartAsync()::$_0, std::__1::allocator<kudu::master::Master::StartAsync()::$_0>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libmaster.so+0x3f861d)
    #12 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #13 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #14 kudu::ThreadPool::DispatchThread() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:776:7 (libkudu_util.so+0x466866)
    #15 kudu::ThreadPool::CreateThread()::$_2::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/threadpool.cc:849:48 (libkudu_util.so+0x469cc1)
    #16 decltype(std::__1::forward<kudu::ThreadPool::CreateThread()::$_2&>(fp)()) std::__1::__invoke<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/type_traits:3899:1 (libkudu_util.so+0x469c79)
    #17 void std::__1::__invoke_void_return_wrapper<void>::__call<kudu::ThreadPool::CreateThread()::$_2&>(kudu::ThreadPool::CreateThread()::$_2&) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/__functional_base:348:9 (libkudu_util.so+0x469c09)
    #18 std::__1::__function::__alloc_func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1557:16 (libkudu_util.so+0x469bd1)
    #19 std::__1::__function::__func<kudu::ThreadPool::CreateThread()::$_2, std::__1::allocator<kudu::ThreadPool::CreateThread()::$_2>, void ()>::operator()() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1731:12 (libkudu_util.so+0x468ecd)
    #20 std::__1::__function::__value_func<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:1884:16 (libtserver_test_util.so+0x60234)
    #21 std::__1::function<void ()>::operator()() const /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/c++/v1/functional:2556:12 (libtserver_test_util.so+0x60069)
    #22 kudu::Thread::SuperviseThread(void*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:693:3 (libkudu_util.so+0x448fe6)

  Thread T342 'rpc worker-2331' (tid=23318, running) created by main thread at:
    #0 pthread_create /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/llvm-11.0.0.src/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp:966 (auto_rebalancer-test+0x2ea825)
    #1 kudu::Thread::StartThread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, unsigned long, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:637:15 (libkudu_util.so+0x44884a)
    #2 kudu::Thread::Create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::function<void ()>, scoped_refptr<kudu::Thread>*) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.h:146:12 (libmaster.so+0x2d07f9)
    #3 kudu::rpc::ServicePool::Init(int) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/service_pool.cc:92:5 (libkrpc.so+0x1f3f1f)
    #4 kudu::RpcServer::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/rpc_server.cc:238:3 (libserver_process.so+0x13460f)
    #5 kudu::server::ServerBase::RegisterService(std::__1::unique_ptr<kudu::rpc::ServiceIf, std::__1::default_delete<kudu::rpc::ServiceIf> >) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/server/server_base.cc:1171:23 (libserver_process.so+0x14698c)
    #6 kudu::master::Master::StartAsync() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/master.cc:358:3 (libmaster.so+0x3f3ce7)
    #7 kudu::master::MiniMaster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/mini_master.cc:96:3 (libmaster.so+0x4bdd12)
    #8 kudu::cluster::InternalMiniCluster::StartMasters() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:178:5 (libmini_cluster.so+0xd6ddf)
    #9 kudu::cluster::InternalMiniCluster::Start() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/mini-cluster/internal_mini_cluster.cc:109:3 (libmini_cluster.so+0xd660b)
    #10 kudu::master::AutoRebalancerTest::CreateAndStartCluster(bool) /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:118:22 (auto_rebalancer-test+0x37d2b8)
    #11 kudu::master::AutoRebalancerTest_NoReplicaMovesIfCannotFixPlacementPolicy_Test::TestBody() /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:490:3 (auto_rebalancer-test+0x369d69)
    #12 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x64e2f)
    #13 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x64e2f)
    #14 testing::Test::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2674:5 (libgtest.so.1.12.1+0x42a31)
    #15 testing::TestInfo::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2853:11 (libgtest.so.1.12.1+0x43d48)
    #16 testing::TestSuite::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:3012:30 (libgtest.so.1.12.1+0x44d24)
    #17 testing::internal::UnitTestImpl::RunAllTests() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5870:44 (libgtest.so.1.12.1+0x59814)
    #18 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2599:10 (libgtest.so.1.12.1+0x65cef)
    #19 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:2635:14 (libgtest.so.1.12.1+0x65cef)
    #20 testing::UnitTest::Run() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/src/googletest-release-1.12.1/googletest/src/gtest.cc:5444:10 (libgtest.so.1.12.1+0x58dcc)
    #21 RUN_ALL_TESTS() /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/installed/tsan/include/gtest/gtest.h:2293:73 (auto_rebalancer-test+0x3a8bfb)
    #22 main /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/test_main.cc:109:10 (auto_rebalancer-test+0x3a7afc)

SUMMARY: ThreadSanitizer: data race /home/jenkins-slave/workspace/build_and_test_flaky@2/thirdparty/build/llvm-11.0.0.libcxx.tsan/include/c++/v1/string:1423:39 in __is_long
==================
I20250114 20:58:05.200776 23318 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:58330:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:58:05.202885 23318 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:58:05.253439 23485 tablet_service.cc:1467] Processing CreateTablet for tablet 7bed5c07aa3b40b1af586884e6f9aa82 (DEFAULT_TABLE table=test-workload [id=b170420a5faa405394bfe4de6c185e9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:05.254813 23485 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7bed5c07aa3b40b1af586884e6f9aa82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:05.262828 23559 tablet_service.cc:1467] Processing CreateTablet for tablet 7bed5c07aa3b40b1af586884e6f9aa82 (DEFAULT_TABLE table=test-workload [id=b170420a5faa405394bfe4de6c185e9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:05.263453 23410 tablet_service.cc:1467] Processing CreateTablet for tablet 7bed5c07aa3b40b1af586884e6f9aa82 (DEFAULT_TABLE table=test-workload [id=b170420a5faa405394bfe4de6c185e9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:05.263921 23559 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7bed5c07aa3b40b1af586884e6f9aa82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:05.264933 23410 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7bed5c07aa3b40b1af586884e6f9aa82. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:05.273156 23616 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: Bootstrap starting.
I20250114 20:58:05.279785 23617 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: Bootstrap starting.
I20250114 20:58:05.280984 23616 tablet_bootstrap.cc:654] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:05.281327 23618 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: Bootstrap starting.
I20250114 20:58:05.285674 23616 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: No bootstrap required, opened a new log
I20250114 20:58:05.286074 23616 ts_tablet_manager.cc:1397] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: Time spent bootstrapping tablet: real 0.013s	user 0.007s	sys 0.004s
I20250114 20:58:05.287402 23617 tablet_bootstrap.cc:654] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:05.288015 23618 tablet_bootstrap.cc:654] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:05.288486 23616 raft_consensus.cc:357] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.289479 23616 raft_consensus.cc:383] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:05.289752 23616 raft_consensus.cc:738] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b8513020d7904d268a210ae3fba4c4aa, State: Initialized, Role: FOLLOWER
I20250114 20:58:05.290601 23616 consensus_queue.cc:260] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.293421 23616 ts_tablet_manager.cc:1428] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.002s
I20250114 20:58:05.296437 23617 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: No bootstrap required, opened a new log
I20250114 20:58:05.296844 23617 ts_tablet_manager.cc:1397] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: Time spent bootstrapping tablet: real 0.017s	user 0.010s	sys 0.004s
I20250114 20:58:05.297595 23618 tablet_bootstrap.cc:492] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: No bootstrap required, opened a new log
I20250114 20:58:05.298028 23618 ts_tablet_manager.cc:1397] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: Time spent bootstrapping tablet: real 0.017s	user 0.004s	sys 0.010s
I20250114 20:58:05.298964 23617 raft_consensus.cc:357] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.299520 23617 raft_consensus.cc:383] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:05.299773 23617 raft_consensus.cc:738] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a873a31ad7e44a40972b91f340d5f612, State: Initialized, Role: FOLLOWER
I20250114 20:58:05.300338 23617 consensus_queue.cc:260] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.300710 23618 raft_consensus.cc:357] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.301447 23618 raft_consensus.cc:383] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:05.301723 23618 raft_consensus.cc:738] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ada5d4385655434db4a1aac44c4efc90, State: Initialized, Role: FOLLOWER
I20250114 20:58:05.302392 23618 consensus_queue.cc:260] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.305159 23617 ts_tablet_manager.cc:1428] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: Time spent starting tablet: real 0.008s	user 0.000s	sys 0.007s
I20250114 20:58:05.311826 23618 ts_tablet_manager.cc:1428] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: Time spent starting tablet: real 0.014s	user 0.011s	sys 0.005s
W20250114 20:58:05.346004 23516 debug-util.cc:398] Leaking SignalData structure 0x7b0800048900 after lost signal to thread 20373
I20250114 20:58:05.435947 23624 raft_consensus.cc:491] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:05.436556 23624 raft_consensus.cc:513] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.439247 23624 leader_election.cc:290] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers b8513020d7904d268a210ae3fba4c4aa (127.19.228.130:39451), a873a31ad7e44a40972b91f340d5f612 (127.19.228.131:34173)
I20250114 20:58:05.453923 23495 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7bed5c07aa3b40b1af586884e6f9aa82" candidate_uuid: "ada5d4385655434db4a1aac44c4efc90" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b8513020d7904d268a210ae3fba4c4aa" is_pre_election: true
I20250114 20:58:05.454876 23495 raft_consensus.cc:2463] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ada5d4385655434db4a1aac44c4efc90 in term 0.
I20250114 20:58:05.456192 23383 leader_election.cc:304] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: ada5d4385655434db4a1aac44c4efc90, b8513020d7904d268a210ae3fba4c4aa; no voters: 
I20250114 20:58:05.456971 23624 raft_consensus.cc:2798] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:05.457317 23624 raft_consensus.cc:491] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:05.457628 23624 raft_consensus.cc:3054] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:05.465279 23569 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7bed5c07aa3b40b1af586884e6f9aa82" candidate_uuid: "ada5d4385655434db4a1aac44c4efc90" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a873a31ad7e44a40972b91f340d5f612" is_pre_election: true
I20250114 20:58:05.466037 23569 raft_consensus.cc:2463] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ada5d4385655434db4a1aac44c4efc90 in term 0.
I20250114 20:58:05.465739 23624 raft_consensus.cc:513] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.468020 23624 leader_election.cc:290] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [CANDIDATE]: Term 1 election: Requested vote from peers b8513020d7904d268a210ae3fba4c4aa (127.19.228.130:39451), a873a31ad7e44a40972b91f340d5f612 (127.19.228.131:34173)
I20250114 20:58:05.468706 23495 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7bed5c07aa3b40b1af586884e6f9aa82" candidate_uuid: "ada5d4385655434db4a1aac44c4efc90" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b8513020d7904d268a210ae3fba4c4aa"
I20250114 20:58:05.469231 23495 raft_consensus.cc:3054] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:05.469413 23569 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7bed5c07aa3b40b1af586884e6f9aa82" candidate_uuid: "ada5d4385655434db4a1aac44c4efc90" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a873a31ad7e44a40972b91f340d5f612"
I20250114 20:58:05.469954 23569 raft_consensus.cc:3054] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:05.474332 23495 raft_consensus.cc:2463] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ada5d4385655434db4a1aac44c4efc90 in term 1.
I20250114 20:58:05.475586 23383 leader_election.cc:304] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: ada5d4385655434db4a1aac44c4efc90, b8513020d7904d268a210ae3fba4c4aa; no voters: 
I20250114 20:58:05.476804 23569 raft_consensus.cc:2463] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ada5d4385655434db4a1aac44c4efc90 in term 1.
I20250114 20:58:05.488488 23624 raft_consensus.cc:2798] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:05.489832 23624 raft_consensus.cc:695] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 1 LEADER]: Becoming Leader. State: Replica: ada5d4385655434db4a1aac44c4efc90, State: Running, Role: LEADER
I20250114 20:58:05.490674 23624 consensus_queue.cc:237] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } }
I20250114 20:58:05.501250 23318 catalog_manager.cc:5526] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 reported cstate change: term changed from 0 to 1, leader changed from <none> to ada5d4385655434db4a1aac44c4efc90 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "ada5d4385655434db4a1aac44c4efc90" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ada5d4385655434db4a1aac44c4efc90" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42393 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:06.012902 23629 consensus_queue.cc:1035] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a873a31ad7e44a40972b91f340d5f612" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 34173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:06.026319 23629 consensus_queue.cc:1035] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b8513020d7904d268a210ae3fba4c4aa" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39451 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:06.152562 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:06.154343 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:06.154624 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:06.186683 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:07.197118 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:08.155428 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:08.156656 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:08.156939 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:08.206616 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:09.213912 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:10.157738 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:10.158950 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:10.159266 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:10.221279 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:11.228536 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:12.160109 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:12.161332 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:12.161612 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:12.236030 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:13.243253 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:14.162400 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:14.163687 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:14.163959 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:14.250382 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:15.257427 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:16.164909 23371 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:16.166247 23371 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:16.166501 23371 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:16.263715 23366 placement_policy_util.cc:407] tablet 7bed5c07aa3b40b1af586884e6f9aa82: detected majority of replicas (2 of 3) at location L0
I20250114 20:58:16.603113 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:16.626514 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:16.627182 20370 tablet_replica.cc:331] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90: stopping tablet replica
I20250114 20:58:16.627784 20370 raft_consensus.cc:2238] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:16.628751 20370 raft_consensus.cc:2267] T 7bed5c07aa3b40b1af586884e6f9aa82 P ada5d4385655434db4a1aac44c4efc90 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:16.648298 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:16.659799 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:58:16.678306 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:16.678948 20370 tablet_replica.cc:331] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa: stopping tablet replica
I20250114 20:58:16.679445 20370 raft_consensus.cc:2238] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:16.679936 20370 raft_consensus.cc:2267] T 7bed5c07aa3b40b1af586884e6f9aa82 P b8513020d7904d268a210ae3fba4c4aa [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:16.714836 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:16.724992 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:16.742727 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:16.743335 20370 tablet_replica.cc:331] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612: stopping tablet replica
I20250114 20:58:16.743878 20370 raft_consensus.cc:2238] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:16.744304 20370 raft_consensus.cc:2267] T 7bed5c07aa3b40b1af586884e6f9aa82 P a873a31ad7e44a40972b91f340d5f612 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:16.762290 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:16.772475 20370 master.cc:537] Master@127.19.228.190:46741 shutting down...
I20250114 20:58:16.787492 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:16.788151 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:16.788439 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 5a1cb7e676e04c1fb80029ff78da9c39: stopping tablet replica
I20250114 20:58:16.805753 20370 master.cc:559] Master@127.19.228.190:46741 shutdown complete.
[       OK ] AutoRebalancerTest.NoReplicaMovesIfCannotFixPlacementPolicy (15181 ms)
[ RUN      ] AutoRebalancerTest.TestMaxMovesPerServer
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/auto_rebalancer-test.cc:516: Skipped
test is skipped; set KUDU_ALLOW_SLOW_TESTS=1 to run
[  SKIPPED ] AutoRebalancerTest.TestMaxMovesPerServer (6 ms)
[ RUN      ] AutoRebalancerTest.AutoRebalancingUnstableCluster
I20250114 20:58:16.842257 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:37239
I20250114 20:58:16.843245 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:16.848474 23642 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:16.849113 23643 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:16.850003 23645 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:16.850564 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:16.851294 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:16.851478 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:16.851630 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888296851615 us; error 0 us; skew 500 ppm
I20250114 20:58:16.852053 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:16.854290 20370 webserver.cc:458] Webserver started at http://127.19.228.190:40821/ using document root <none> and password file <none>
I20250114 20:58:16.854699 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:16.854840 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:16.855057 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:16.856204 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "c5f8655384374b88ae9f5b1f2cd337e7"
format_stamp: "Formatted at 2025-01-14 20:58:16 on dist-test-slave-kc3q"
I20250114 20:58:16.860255 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.002s
I20250114 20:58:16.863171 23650 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:16.863986 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:16.864269 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "c5f8655384374b88ae9f5b1f2cd337e7"
format_stamp: "Formatted at 2025-01-14 20:58:16 on dist-test-slave-kc3q"
I20250114 20:58:16.864524 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:16.905722 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:16.906878 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:16.941356 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:37239
I20250114 20:58:16.941442 23701 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:37239 every 8 connection(s)
I20250114 20:58:16.945004 23702 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:16.955216 23702 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7: Bootstrap starting.
I20250114 20:58:16.959466 23702 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:16.963285 23702 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7: No bootstrap required, opened a new log
I20250114 20:58:16.965246 23702 raft_consensus.cc:357] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER }
I20250114 20:58:16.965727 23702 raft_consensus.cc:383] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:16.965924 23702 raft_consensus.cc:738] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c5f8655384374b88ae9f5b1f2cd337e7, State: Initialized, Role: FOLLOWER
I20250114 20:58:16.966460 23702 consensus_queue.cc:260] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER }
I20250114 20:58:16.966948 23702 raft_consensus.cc:397] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:58:16.967165 23702 raft_consensus.cc:491] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:58:16.967406 23702 raft_consensus.cc:3054] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:16.971707 23702 raft_consensus.cc:513] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER }
I20250114 20:58:16.972226 23702 leader_election.cc:304] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: c5f8655384374b88ae9f5b1f2cd337e7; no voters: 
I20250114 20:58:16.973222 23702 leader_election.cc:290] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:58:16.973486 23705 raft_consensus.cc:2798] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:16.974664 23705 raft_consensus.cc:695] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 1 LEADER]: Becoming Leader. State: Replica: c5f8655384374b88ae9f5b1f2cd337e7, State: Running, Role: LEADER
I20250114 20:58:16.975294 23705 consensus_queue.cc:237] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER }
I20250114 20:58:16.975996 23702 sys_catalog.cc:564] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:58:16.979974 23707 sys_catalog.cc:455] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader c5f8655384374b88ae9f5b1f2cd337e7. Latest consensus state: current_term: 1 leader_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER } }
I20250114 20:58:16.980036 23706 sys_catalog.cc:455] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "c5f8655384374b88ae9f5b1f2cd337e7" member_type: VOTER } }
I20250114 20:58:16.980902 23707 sys_catalog.cc:458] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:16.980952 23706 sys_catalog.cc:458] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:16.983618 23712 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:58:16.988189 23712 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:58:16.991276 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:58:16.996397 23712 catalog_manager.cc:1348] Generated new cluster ID: f9a30148ce1e466eba241c6d79a2b16c
I20250114 20:58:16.996675 23712 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:58:17.018404 23712 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:58:17.019652 23712 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:58:17.029551 23712 catalog_manager.cc:5899] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7: Generated new TSK 0
I20250114 20:58:17.030110 23712 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:58:17.057816 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:17.063697 23723 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:17.064695 23724 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:17.065676 23726 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:17.066218 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:17.067019 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:17.067203 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:17.067353 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888297067336 us; error 0 us; skew 500 ppm
I20250114 20:58:17.067847 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:17.070015 20370 webserver.cc:458] Webserver started at http://127.19.228.129:46091/ using document root <none> and password file <none>
I20250114 20:58:17.070439 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:17.070606 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:17.070844 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:17.071929 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "e67826202a504b209158ff0057d8fb38"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.076145 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:58:17.079237 23731 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.079979 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:58:17.080228 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "e67826202a504b209158ff0057d8fb38"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.080488 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:17.100299 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:17.101397 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:17.102759 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:17.105005 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:17.105198 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.105406 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:17.105547 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.141714 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:41173
I20250114 20:58:17.141795 23793 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:41173 every 8 connection(s)
I20250114 20:58:17.145990 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:17.154294 23799 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:17.154834 23798 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:17.156409 20370 server_base.cc:1034] running on GCE node
W20250114 20:58:17.156950 23801 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:17.157729 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:17.157935 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:17.157970 23794 heartbeater.cc:346] Connected to a master server at 127.19.228.190:37239
I20250114 20:58:17.158140 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888297158122 us; error 0 us; skew 500 ppm
I20250114 20:58:17.158377 23794 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:17.158864 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:17.159214 23794 heartbeater.cc:510] Master 127.19.228.190:37239 requested a full tablet report, sending...
I20250114 20:58:17.161391 23667 ts_manager.cc:194] Registered new tserver with Master: e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173)
I20250114 20:58:17.161584 20370 webserver.cc:458] Webserver started at http://127.19.228.130:46813/ using document root <none> and password file <none>
I20250114 20:58:17.162081 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:17.162252 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:17.162494 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:17.163197 23667 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:49216
I20250114 20:58:17.163614 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "c8c4066502a34b6d81d38fa171dc8224"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.167846 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.002s
I20250114 20:58:17.170729 23806 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.171370 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:17.171646 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "c8c4066502a34b6d81d38fa171dc8224"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.171897 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:17.185226 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:17.186308 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:17.187609 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:17.189675 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:17.189857 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.190063 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:17.190204 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.227150 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:44507
I20250114 20:58:17.227242 23868 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:44507 every 8 connection(s)
I20250114 20:58:17.231505 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:17.237921 23872 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:17.239488 23873 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:17.240986 23875 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:17.242411 23869 heartbeater.cc:346] Connected to a master server at 127.19.228.190:37239
I20250114 20:58:17.242715 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:17.242723 23869 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:17.243628 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:58:17.243721 23869 heartbeater.cc:510] Master 127.19.228.190:37239 requested a full tablet report, sending...
W20250114 20:58:17.243840 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:17.244071 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888297244054 us; error 0 us; skew 500 ppm
I20250114 20:58:17.244670 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:17.245697 23667 ts_manager.cc:194] Registered new tserver with Master: c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507)
I20250114 20:58:17.247270 20370 webserver.cc:458] Webserver started at http://127.19.228.131:35857/ using document root <none> and password file <none>
I20250114 20:58:17.247439 23667 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:49218
I20250114 20:58:17.247934 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:17.248183 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:17.248471 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:17.249614 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "35bf8c4277f64c7aad4248f75e13ced6"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.254001 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.000s	sys 0.006s
I20250114 20:58:17.256984 23880 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.257664 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:58:17.257907 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "35bf8c4277f64c7aad4248f75e13ced6"
format_stamp: "Formatted at 2025-01-14 20:58:17 on dist-test-slave-kc3q"
I20250114 20:58:17.258177 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:17.287305 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:17.288416 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:17.289767 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:17.291869 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:17.292054 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.292260 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:17.292402 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:17.327831 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:44951
I20250114 20:58:17.327916 23942 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:44951 every 8 connection(s)
I20250114 20:58:17.341785 23943 heartbeater.cc:346] Connected to a master server at 127.19.228.190:37239
I20250114 20:58:17.342147 23943 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:17.342798 23943 heartbeater.cc:510] Master 127.19.228.190:37239 requested a full tablet report, sending...
I20250114 20:58:17.344561 23667 ts_manager.cc:194] Registered new tserver with Master: 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:17.345574 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.013345052s
I20250114 20:58:17.345952 23667 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:49228
I20250114 20:58:18.165465 23794 heartbeater.cc:502] Master 127.19.228.190:37239 was elected leader, sending a full tablet report...
I20250114 20:58:18.249783 23869 heartbeater.cc:502] Master 127.19.228.190:37239 was elected leader, sending a full tablet report...
I20250114 20:58:18.348618 23943 heartbeater.cc:502] Master 127.19.228.190:37239 was elected leader, sending a full tablet report...
I20250114 20:58:18.377261 20370 test_util.cc:274] Using random seed: -771092458
I20250114 20:58:18.397512 23667 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:49242:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\252\252\252*\004\001\000TUUU"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:58:18.399608 23667 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:58:18.446585 23759 tablet_service.cc:1467] Processing CreateTablet for tablet 9848d7b9c87a4afc8f1310c6874f0ceb (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION VALUES < 715827882
I20250114 20:58:18.447962 23759 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9848d7b9c87a4afc8f1310c6874f0ceb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.452634 23758 tablet_service.cc:1467] Processing CreateTablet for tablet d951ccf53e8e4e97ba4d353fc8a6e577 (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 715827882 <= VALUES < 1431655764
I20250114 20:58:18.454005 23758 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d951ccf53e8e4e97ba4d353fc8a6e577. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.455909 23757 tablet_service.cc:1467] Processing CreateTablet for tablet f021c853c1ae4e989f8195d92a59bd8b (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 1431655764 <= VALUES
I20250114 20:58:18.457207 23757 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f021c853c1ae4e989f8195d92a59bd8b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.475910 23834 tablet_service.cc:1467] Processing CreateTablet for tablet 9848d7b9c87a4afc8f1310c6874f0ceb (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION VALUES < 715827882
I20250114 20:58:18.477331 23834 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9848d7b9c87a4afc8f1310c6874f0ceb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.481678 23963 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: Bootstrap starting.
I20250114 20:58:18.483691 23833 tablet_service.cc:1467] Processing CreateTablet for tablet d951ccf53e8e4e97ba4d353fc8a6e577 (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 715827882 <= VALUES < 1431655764
I20250114 20:58:18.484974 23833 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d951ccf53e8e4e97ba4d353fc8a6e577. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.487649 23963 tablet_bootstrap.cc:654] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.498019 23832 tablet_service.cc:1467] Processing CreateTablet for tablet f021c853c1ae4e989f8195d92a59bd8b (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 1431655764 <= VALUES
I20250114 20:58:18.499257 23832 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f021c853c1ae4e989f8195d92a59bd8b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.500134 23963 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: No bootstrap required, opened a new log
I20250114 20:58:18.500612 23963 ts_tablet_manager.cc:1397] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: Time spent bootstrapping tablet: real 0.019s	user 0.008s	sys 0.008s
I20250114 20:58:18.503451 23963 raft_consensus.cc:357] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.504323 23963 raft_consensus.cc:383] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.504639 23963 raft_consensus.cc:738] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e67826202a504b209158ff0057d8fb38, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.505363 23963 consensus_queue.cc:260] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.510850 23965 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Bootstrap starting.
I20250114 20:58:18.512679 23907 tablet_service.cc:1467] Processing CreateTablet for tablet d951ccf53e8e4e97ba4d353fc8a6e577 (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 715827882 <= VALUES < 1431655764
I20250114 20:58:18.513911 23907 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d951ccf53e8e4e97ba4d353fc8a6e577. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.514225 23906 tablet_service.cc:1467] Processing CreateTablet for tablet f021c853c1ae4e989f8195d92a59bd8b (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION 1431655764 <= VALUES
I20250114 20:58:18.515352 23906 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f021c853c1ae4e989f8195d92a59bd8b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.519143 23965 tablet_bootstrap.cc:654] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.511286 23908 tablet_service.cc:1467] Processing CreateTablet for tablet 9848d7b9c87a4afc8f1310c6874f0ceb (DEFAULT_TABLE table=test-workload [id=90f4338c9b884eb09e81c814b7efa307]), partition=RANGE (key) PARTITION VALUES < 715827882
I20250114 20:58:18.524221 23908 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9848d7b9c87a4afc8f1310c6874f0ceb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:18.533272 23963 ts_tablet_manager.cc:1428] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: Time spent starting tablet: real 0.032s	user 0.016s	sys 0.010s
I20250114 20:58:18.534312 23963 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: Bootstrap starting.
I20250114 20:58:18.535665 23968 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Bootstrap starting.
I20250114 20:58:18.540211 23963 tablet_bootstrap.cc:654] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.541338 23965 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: No bootstrap required, opened a new log
I20250114 20:58:18.541823 23965 ts_tablet_manager.cc:1397] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Time spent bootstrapping tablet: real 0.031s	user 0.010s	sys 0.005s
I20250114 20:58:18.544574 23965 raft_consensus.cc:357] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.545133 23968 tablet_bootstrap.cc:654] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.545387 23965 raft_consensus.cc:383] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.547128 23965 raft_consensus.cc:738] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.548002 23965 consensus_queue.cc:260] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.549633 23963 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: No bootstrap required, opened a new log
I20250114 20:58:18.550195 23963 ts_tablet_manager.cc:1397] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: Time spent bootstrapping tablet: real 0.016s	user 0.010s	sys 0.005s
I20250114 20:58:18.553273 23963 raft_consensus.cc:357] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.553975 23963 raft_consensus.cc:383] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.554279 23963 raft_consensus.cc:738] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e67826202a504b209158ff0057d8fb38, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.555362 23968 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: No bootstrap required, opened a new log
I20250114 20:58:18.555909 23968 ts_tablet_manager.cc:1397] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Time spent bootstrapping tablet: real 0.021s	user 0.004s	sys 0.011s
I20250114 20:58:18.555518 23963 consensus_queue.cc:260] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.553985 23965 ts_tablet_manager.cc:1428] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Time spent starting tablet: real 0.012s	user 0.003s	sys 0.006s
I20250114 20:58:18.558266 23965 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: Bootstrap starting.
I20250114 20:58:18.558461 23968 raft_consensus.cc:357] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.559207 23968 raft_consensus.cc:383] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.559473 23968 raft_consensus.cc:738] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 35bf8c4277f64c7aad4248f75e13ced6, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.560283 23968 consensus_queue.cc:260] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.563822 23963 ts_tablet_manager.cc:1428] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: Time spent starting tablet: real 0.013s	user 0.003s	sys 0.003s
I20250114 20:58:18.564811 23963 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: Bootstrap starting.
I20250114 20:58:18.566426 23965 tablet_bootstrap.cc:654] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.571584 23963 tablet_bootstrap.cc:654] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.573570 23968 ts_tablet_manager.cc:1428] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Time spent starting tablet: real 0.017s	user 0.001s	sys 0.014s
I20250114 20:58:18.574556 23968 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: Bootstrap starting.
I20250114 20:58:18.574769 23965 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: No bootstrap required, opened a new log
I20250114 20:58:18.575279 23965 ts_tablet_manager.cc:1397] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: Time spent bootstrapping tablet: real 0.017s	user 0.001s	sys 0.009s
I20250114 20:58:18.578048 23965 raft_consensus.cc:357] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.579380 23971 raft_consensus.cc:491] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:18.578795 23965 raft_consensus.cc:383] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.579833 23965 raft_consensus.cc:738] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.579867 23971 raft_consensus.cc:513] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.580653 23965 consensus_queue.cc:260] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.581277 23968 tablet_bootstrap.cc:654] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.582473 23971 leader_election.cc:290] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.588866 23965 ts_tablet_manager.cc:1428] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: Time spent starting tablet: real 0.013s	user 0.010s	sys 0.002s
I20250114 20:58:18.590668 23963 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: No bootstrap required, opened a new log
I20250114 20:58:18.595818 23963 ts_tablet_manager.cc:1397] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: Time spent bootstrapping tablet: real 0.031s	user 0.013s	sys 0.002s
I20250114 20:58:18.596091 23975 raft_consensus.cc:491] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:18.596518 23975 raft_consensus.cc:513] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.597090 23965 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: Bootstrap starting.
I20250114 20:58:18.598510 23975 leader_election.cc:290] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.598685 23963 raft_consensus.cc:357] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.605952 23963 raft_consensus.cc:383] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.606303 23963 raft_consensus.cc:738] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e67826202a504b209158ff0057d8fb38, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.607146 23963 consensus_queue.cc:260] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.604157 23965 tablet_bootstrap.cc:654] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.609560 23963 ts_tablet_manager.cc:1428] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: Time spent starting tablet: real 0.013s	user 0.005s	sys 0.000s
I20250114 20:58:18.613574 23968 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: No bootstrap required, opened a new log
I20250114 20:58:18.614032 23968 ts_tablet_manager.cc:1397] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: Time spent bootstrapping tablet: real 0.040s	user 0.008s	sys 0.022s
I20250114 20:58:18.616667 23968 raft_consensus.cc:357] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.617412 23968 raft_consensus.cc:383] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.617722 23968 raft_consensus.cc:738] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 35bf8c4277f64c7aad4248f75e13ced6, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.618485 23968 consensus_queue.cc:260] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.620675 23968 ts_tablet_manager.cc:1428] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: Time spent starting tablet: real 0.006s	user 0.005s	sys 0.002s
I20250114 20:58:18.621569 23968 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: Bootstrap starting.
I20250114 20:58:18.623095 23917 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "d951ccf53e8e4e97ba4d353fc8a6e577" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6" is_pre_election: true
I20250114 20:58:18.623920 23917 raft_consensus.cc:2463] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.625264 23769 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38" is_pre_election: true
I20250114 20:58:18.625471 23808 leader_election.cc:304] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, c8c4066502a34b6d81d38fa171dc8224; no voters: 
I20250114 20:58:18.625768 23768 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "d951ccf53e8e4e97ba4d353fc8a6e577" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38" is_pre_election: true
I20250114 20:58:18.625960 23769 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.626359 23975 raft_consensus.cc:2798] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:18.626399 23768 raft_consensus.cc:2463] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.626837 23975 raft_consensus.cc:491] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:18.627509 23809 leader_election.cc:304] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: c8c4066502a34b6d81d38fa171dc8224, e67826202a504b209158ff0057d8fb38; no voters: 
I20250114 20:58:18.628407 23971 raft_consensus.cc:2798] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:18.628747 23971 raft_consensus.cc:491] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:18.629053 23971 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.629945 23975 raft_consensus.cc:3054] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.622376 23918 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6" is_pre_election: true
I20250114 20:58:18.632036 23918 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.633841 23968 tablet_bootstrap.cc:654] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:18.634577 23965 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: No bootstrap required, opened a new log
I20250114 20:58:18.635206 23965 ts_tablet_manager.cc:1397] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: Time spent bootstrapping tablet: real 0.038s	user 0.014s	sys 0.012s
I20250114 20:58:18.637485 23971 raft_consensus.cc:513] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.637887 23965 raft_consensus.cc:357] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.638024 23975 raft_consensus.cc:513] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.638707 23965 raft_consensus.cc:383] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.638996 23965 raft_consensus.cc:738] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.639910 23975 leader_election.cc:290] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Requested vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.639679 23965 consensus_queue.cc:260] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.641033 23768 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38"
I20250114 20:58:18.641650 23768 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.641794 23965 ts_tablet_manager.cc:1428] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:58:18.642311 23769 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "d951ccf53e8e4e97ba4d353fc8a6e577" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38"
I20250114 20:58:18.642477 23918 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "d951ccf53e8e4e97ba4d353fc8a6e577" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
I20250114 20:58:18.642889 23769 raft_consensus.cc:3054] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.645408 23968 tablet_bootstrap.cc:492] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: No bootstrap required, opened a new log
I20250114 20:58:18.645726 23968 ts_tablet_manager.cc:1397] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: Time spent bootstrapping tablet: real 0.024s	user 0.013s	sys 0.003s
I20250114 20:58:18.642896 23918 raft_consensus.cc:3054] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.648362 23968 raft_consensus.cc:357] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.649098 23968 raft_consensus.cc:383] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:18.649400 23968 raft_consensus.cc:738] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 35bf8c4277f64c7aad4248f75e13ced6, State: Initialized, Role: FOLLOWER
I20250114 20:58:18.650022 23968 consensus_queue.cc:260] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.651130 23966 raft_consensus.cc:491] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:18.651974 23968 ts_tablet_manager.cc:1428] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.001s
I20250114 20:58:18.651580 23966 raft_consensus.cc:513] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.653991 23966 leader_election.cc:290] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.657886 23769 raft_consensus.cc:2463] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.663506 23917 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
I20250114 20:58:18.664186 23917 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.665347 23768 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.667861 23975 raft_consensus.cc:491] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:18.668218 23809 leader_election.cc:304] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: c8c4066502a34b6d81d38fa171dc8224, e67826202a504b209158ff0057d8fb38; no voters: 
I20250114 20:58:18.668293 23975 raft_consensus.cc:513] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.670634 23975 leader_election.cc:290] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.671110 23975 raft_consensus.cc:2798] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:18.671511 23975 raft_consensus.cc:695] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Becoming Leader. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Running, Role: LEADER
I20250114 20:58:18.672317 23975 consensus_queue.cc:237] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.672827 23916 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6" is_pre_election: true
I20250114 20:58:18.673450 23916 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.674561 23808 leader_election.cc:304] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, c8c4066502a34b6d81d38fa171dc8224; no voters: 
I20250114 20:58:18.677917 23917 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.661257 23918 raft_consensus.cc:2463] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.681934 23971 leader_election.cc:290] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Requested vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.683580 23983 raft_consensus.cc:2798] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:18.683983 23983 raft_consensus.cc:491] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:18.684288 23983 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.689208 23808 leader_election.cc:304] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, c8c4066502a34b6d81d38fa171dc8224; no voters: 
I20250114 20:58:18.690397 23984 raft_consensus.cc:2798] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:18.690928 23984 raft_consensus.cc:695] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Becoming Leader. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Running, Role: LEADER
I20250114 20:58:18.691082 23844 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "e67826202a504b209158ff0057d8fb38" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "c8c4066502a34b6d81d38fa171dc8224" is_pre_election: true
I20250114 20:58:18.691936 23984 consensus_queue.cc:237] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.692414 23768 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38" is_pre_election: true
I20250114 20:58:18.691108 23983 raft_consensus.cc:513] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.693248 23768 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 0.
I20250114 20:58:18.693851 23844 raft_consensus.cc:2388] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate e67826202a504b209158ff0057d8fb38 in current term 1: Already voted for candidate c8c4066502a34b6d81d38fa171dc8224 in this term.
I20250114 20:58:18.703894 23768 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e67826202a504b209158ff0057d8fb38"
I20250114 20:58:18.704602 23768 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.704707 23918 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "e67826202a504b209158ff0057d8fb38" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6" is_pre_election: true
I20250114 20:58:18.705382 23918 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e67826202a504b209158ff0057d8fb38 in term 0.
I20250114 20:58:18.706324 23983 leader_election.cc:290] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Requested vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:18.705971 23667 catalog_manager.cc:5526] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 reported cstate change: term changed from 0 to 1, leader changed from <none> to c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:18.705998 23917 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "c8c4066502a34b6d81d38fa171dc8224" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
I20250114 20:58:18.708701 23917 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:18.710999 23733 leader_election.cc:304] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, e67826202a504b209158ff0057d8fb38; no voters: c8c4066502a34b6d81d38fa171dc8224
I20250114 20:58:18.713861 23917 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.714828 23808 leader_election.cc:304] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, c8c4066502a34b6d81d38fa171dc8224; no voters: 
I20250114 20:58:18.715669 23983 raft_consensus.cc:2798] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:18.715842 23768 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate c8c4066502a34b6d81d38fa171dc8224 in term 1.
I20250114 20:58:18.716245 23983 raft_consensus.cc:695] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Becoming Leader. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Running, Role: LEADER
I20250114 20:58:18.716468 23966 raft_consensus.cc:2758] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Leader pre-election decision vote started in defunct term 0: won
I20250114 20:58:18.717131 23983 consensus_queue.cc:237] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:18.725714 23666 catalog_manager.cc:5526] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 reported cstate change: term changed from 0 to 1, leader changed from <none> to c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:18.735924 23666 catalog_manager.cc:5526] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 reported cstate change: term changed from 0 to 1, leader changed from <none> to c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:18.773013 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:18.779245 23988 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:18.780045 23989 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:18.780937 20370 server_base.cc:1034] running on GCE node
W20250114 20:58:18.781708 23991 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:18.782524 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:18.782739 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:18.782898 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888298782881 us; error 0 us; skew 500 ppm
I20250114 20:58:18.783375 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:18.785553 20370 webserver.cc:458] Webserver started at http://127.19.228.132:38733/ using document root <none> and password file <none>
I20250114 20:58:18.786010 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:18.786185 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:18.786409 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:18.787390 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "53a9d98d64884fd48f7a32d552a06c38"
format_stamp: "Formatted at 2025-01-14 20:58:18 on dist-test-slave-kc3q"
I20250114 20:58:18.791487 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.001s
I20250114 20:58:18.794341 23996 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:18.795022 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:58:18.795279 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "53a9d98d64884fd48f7a32d552a06c38"
format_stamp: "Formatted at 2025-01-14 20:58:18 on dist-test-slave-kc3q"
I20250114 20:58:18.795569 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:18.810525 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:18.811656 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:18.813037 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:18.815079 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:18.815263 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:18.815469 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:18.815647 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.001s	sys 0.000s
I20250114 20:58:18.851485 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:39323
I20250114 20:58:18.851603 24058 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:39323 every 8 connection(s)
I20250114 20:58:18.863737 24059 heartbeater.cc:346] Connected to a master server at 127.19.228.190:37239
I20250114 20:58:18.864055 24059 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:18.864691 24059 heartbeater.cc:510] Master 127.19.228.190:37239 requested a full tablet report, sending...
I20250114 20:58:18.866324 23666 ts_manager.cc:194] Registered new tserver with Master: 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132:39323)
I20250114 20:58:18.867627 23666 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:49244
I20250114 20:58:18.914793 23983 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250114 20:58:18.920706 23984 fault_injection.cc:43] FAULT INJECTION ENABLED!
W20250114 20:58:18.921106 23984 fault_injection.cc:44] THIS SERVER MAY CRASH!
I20250114 20:58:18.991243 23720 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:19.006618 23975 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:19.006948 23843 tablet_service.cc:1967] Received LeaderStepDown RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb"
dest_uuid: "c8c4066502a34b6d81d38fa171dc8224"
mode: GRACEFUL
new_leader_uuid: "e67826202a504b209158ff0057d8fb38"
 from {username='slave'} at 127.0.0.1:38518
I20250114 20:58:19.007696 23843 raft_consensus.cc:604] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Received request to transfer leadership to TS e67826202a504b209158ff0057d8fb38
I20250114 20:58:19.007651 23844 consensus_queue.cc:237] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } }
I20250114 20:58:19.010596 23843 tablet_service.cc:1967] Received LeaderStepDown RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b"
dest_uuid: "c8c4066502a34b6d81d38fa171dc8224"
mode: GRACEFUL
new_leader_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
 from {username='slave'} at 127.0.0.1:38518
I20250114 20:58:19.011219 23843 raft_consensus.cc:604] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Received request to transfer leadership to TS 35bf8c4277f64c7aad4248f75e13ced6
I20250114 20:58:19.012694 23975 consensus_queue.cc:1035] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:19.014384 23720 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 2
I20250114 20:58:19.014858 23720 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:19.016109 23768 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:58:19.017709 23971 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:58:19.022361 23918 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:58:19.024889 23983 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:19.027439 23983 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:58:19.033830 23768 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:19.047513 23916 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
W20250114 20:58:19.058203 23807 consensus_peers.cc:487] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 -> Peer 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132:39323): Couldn't send request to peer 53a9d98d64884fd48f7a32d552a06c38. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: d951ccf53e8e4e97ba4d353fc8a6e577. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:19.059922 23666 catalog_manager.cc:5526] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 reported cstate change: config changed from index -1 to 2, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:19.069411 23983 consensus_queue.cc:1035] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:19.288708 24077 ts_tablet_manager.cc:927] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Initiating tablet copy from peer c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507)
I20250114 20:58:19.289896 24077 tablet_copy_client.cc:323] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:44507
I20250114 20:58:19.297127 23854 tablet_copy_service.cc:140] P c8c4066502a34b6d81d38fa171dc8224: Received BeginTabletCopySession request for tablet d951ccf53e8e4e97ba4d353fc8a6e577 from peer 53a9d98d64884fd48f7a32d552a06c38 ({username='slave'} at 127.0.0.1:38526)
I20250114 20:58:19.297565 23854 tablet_copy_service.cc:161] P c8c4066502a34b6d81d38fa171dc8224: Beginning new tablet copy session on tablet d951ccf53e8e4e97ba4d353fc8a6e577 from peer 53a9d98d64884fd48f7a32d552a06c38 at {username='slave'} at 127.0.0.1:38526: session id = 53a9d98d64884fd48f7a32d552a06c38-d951ccf53e8e4e97ba4d353fc8a6e577
I20250114 20:58:19.302546 23854 tablet_copy_source_session.cc:215] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:19.305045 24077 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet d951ccf53e8e4e97ba4d353fc8a6e577. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:19.315685 24077 tablet_copy_client.cc:806] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:19.316087 24077 tablet_copy_client.cc:670] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:19.318722 24077 tablet_copy_client.cc:538] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:19.323623 24077 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap starting.
I20250114 20:58:19.332298 24068 raft_consensus.cc:988] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: : Instructing follower 35bf8c4277f64c7aad4248f75e13ced6 to start an election
I20250114 20:58:19.332804 24068 raft_consensus.cc:1076] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Signalling peer 35bf8c4277f64c7aad4248f75e13ced6 to start an election
I20250114 20:58:19.334513 23918 tablet_service.cc:1939] Received Run Leader Election RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b"
dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
 from {username='slave'} at 127.0.0.1:52666
I20250114 20:58:19.335127 23918 raft_consensus.cc:491] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250114 20:58:19.335522 23918 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.338411 24077 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=1 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:58:19.339090 24077 tablet_bootstrap.cc:492] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap complete.
I20250114 20:58:19.339660 24077 ts_tablet_manager.cc:1397] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Time spent bootstrapping tablet: real 0.016s	user 0.012s	sys 0.008s
I20250114 20:58:19.340829 23918 raft_consensus.cc:513] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.342025 24077 raft_consensus.cc:357] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 NON_PARTICIPANT]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.342906 23918 leader_election.cc:290] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [CANDIDATE]: Term 2 election: Requested vote from peers e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173), c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507)
I20250114 20:58:19.343221 24077 raft_consensus.cc:738] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 53a9d98d64884fd48f7a32d552a06c38, State: Initialized, Role: LEARNER
I20250114 20:58:19.343904 24077 consensus_queue.cc:260] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 1, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } }
I20250114 20:58:19.346809 24059 heartbeater.cc:502] Master 127.19.228.190:37239 was elected leader, sending a full tablet report...
I20250114 20:58:19.347223 24077 ts_tablet_manager.cc:1428] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20250114 20:58:19.348994 23854 tablet_copy_service.cc:342] P c8c4066502a34b6d81d38fa171dc8224: Request end of tablet copy session 53a9d98d64884fd48f7a32d552a06c38-d951ccf53e8e4e97ba4d353fc8a6e577 received from {username='slave'} at 127.0.0.1:38526
I20250114 20:58:19.349418 23854 tablet_copy_service.cc:434] P c8c4066502a34b6d81d38fa171dc8224: ending tablet copy session 53a9d98d64884fd48f7a32d552a06c38-d951ccf53e8e4e97ba4d353fc8a6e577 on tablet d951ccf53e8e4e97ba4d353fc8a6e577 with peer 53a9d98d64884fd48f7a32d552a06c38
I20250114 20:58:19.357403 23975 raft_consensus.cc:988] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: : Instructing follower e67826202a504b209158ff0057d8fb38 to start an election
I20250114 20:58:19.357863 24068 raft_consensus.cc:1076] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Signalling peer e67826202a504b209158ff0057d8fb38 to start an election
I20250114 20:58:19.359372 23769 tablet_service.cc:1939] Received Run Leader Election RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb"
dest_uuid: "e67826202a504b209158ff0057d8fb38"
 from {username='slave'} at 127.0.0.1:53592
I20250114 20:58:19.359947 23769 raft_consensus.cc:491] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250114 20:58:19.360286 23769 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.364596 23769 raft_consensus.cc:513] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.367090 23844 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "e67826202a504b209158ff0057d8fb38" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "c8c4066502a34b6d81d38fa171dc8224"
I20250114 20:58:19.367202 23843 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "35bf8c4277f64c7aad4248f75e13ced6" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "c8c4066502a34b6d81d38fa171dc8224"
I20250114 20:58:19.367372 23918 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "9848d7b9c87a4afc8f1310c6874f0ceb" candidate_uuid: "e67826202a504b209158ff0057d8fb38" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "35bf8c4277f64c7aad4248f75e13ced6"
I20250114 20:58:19.367748 23844 raft_consensus.cc:3049] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Stepping down as leader of term 1
I20250114 20:58:19.367857 23843 raft_consensus.cc:3049] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Stepping down as leader of term 1
I20250114 20:58:19.367970 23918 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.368278 23843 raft_consensus.cc:738] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Becoming Follower/Learner. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Running, Role: LEADER
I20250114 20:58:19.368208 23844 raft_consensus.cc:738] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Becoming Follower/Learner. State: Replica: c8c4066502a34b6d81d38fa171dc8224, State: Running, Role: LEADER
I20250114 20:58:19.369253 23844 consensus_queue.cc:260] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.369205 23843 consensus_queue.cc:260] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.370395 23844 raft_consensus.cc:3054] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.371268 23843 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.374209 23918 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e67826202a504b209158ff0057d8fb38 in term 2.
I20250114 20:58:19.375633 23733 leader_election.cc:304] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, e67826202a504b209158ff0057d8fb38; no voters: 
I20250114 20:58:19.376001 23768 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "f021c853c1ae4e989f8195d92a59bd8b" candidate_uuid: "35bf8c4277f64c7aad4248f75e13ced6" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "e67826202a504b209158ff0057d8fb38"
I20250114 20:58:19.376431 23966 raft_consensus.cc:2798] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:58:19.376583 23768 raft_consensus.cc:3054] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:19.377652 23843 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 35bf8c4277f64c7aad4248f75e13ced6 in term 2.
I20250114 20:58:19.378297 23844 raft_consensus.cc:2463] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e67826202a504b209158ff0057d8fb38 in term 2.
I20250114 20:58:19.379801 23966 raft_consensus.cc:695] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 LEADER]: Becoming Leader. State: Replica: e67826202a504b209158ff0057d8fb38, State: Running, Role: LEADER
I20250114 20:58:19.380118 23883 leader_election.cc:304] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35bf8c4277f64c7aad4248f75e13ced6, c8c4066502a34b6d81d38fa171dc8224; no voters: 
I20250114 20:58:19.381134 23972 raft_consensus.cc:2798] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:58:19.380765 23966 consensus_queue.cc:237] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.374619 23769 leader_election.cc:290] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [CANDIDATE]: Term 2 election: Requested vote from peers c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507), 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951)
I20250114 20:58:19.382230 23768 raft_consensus.cc:2463] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 35bf8c4277f64c7aad4248f75e13ced6 in term 2.
I20250114 20:58:19.383241 23972 raft_consensus.cc:695] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 LEADER]: Becoming Leader. State: Replica: 35bf8c4277f64c7aad4248f75e13ced6, State: Running, Role: LEADER
I20250114 20:58:19.384121 23972 consensus_queue.cc:237] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.388119 23666 catalog_manager.cc:5526] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 reported cstate change: term changed from 1 to 2, leader changed from c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) to e67826202a504b209158ff0057d8fb38 (127.19.228.129). New cstate: current_term: 2 leader_uuid: "e67826202a504b209158ff0057d8fb38" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:19.389595 23667 catalog_manager.cc:5526] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 reported cstate change: term changed from 1 to 2, leader changed from c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) to 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131). New cstate: current_term: 2 leader_uuid: "35bf8c4277f64c7aad4248f75e13ced6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } health_report { overall_health: HEALTHY } } }
I20250114 20:58:19.541163 24034 raft_consensus.cc:1212] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:58:19.541796 24034 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 LEARNER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:19.595979 23984 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
W20250114 20:58:19.614288 23715 auto_rebalancer.cc:254] failed to send replica move request: Illegal state: Leader has not yet committed an operation in its own term
I20250114 20:58:19.619215 23844 raft_consensus.cc:1270] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Refusing update from remote peer 35bf8c4277f64c7aad4248f75e13ced6: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
W20250114 20:58:19.620288 23715 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet f021c853c1ae4e989f8195d92a59bd8b, TS c8c4066502a34b6d81d38fa171dc8224 -> TS 53a9d98d64884fd48f7a32d552a06c38 move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
W20250114 20:58:19.620720 23715 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet f021c853c1ae4e989f8195d92a59bd8b, TS c8c4066502a34b6d81d38fa171dc8224 -> TS 53a9d98d64884fd48f7a32d552a06c38 move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } }
I20250114 20:58:19.620391 23972 consensus_queue.cc:1035] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:58:19.625459 23768 raft_consensus.cc:1270] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Refusing update from remote peer 35bf8c4277f64c7aad4248f75e13ced6: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:58:19.626734 23972 consensus_queue.cc:1035] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:58:19.645710 23844 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:58:19.646948 23966 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:58:19.753103 23918 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:58:19.754212 23966 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:58:19.859169 23971 raft_consensus.cc:1059] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: attempting to promote NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 to VOTER
I20250114 20:58:19.861109 23971 consensus_queue.cc:237] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } }
I20250114 20:58:19.865754 23918 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:19.866439 23768 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:19.865756 24034 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 LEARNER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:19.867228 23971 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:19.868078 23983 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:19.868934 23984 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:19.878046 23768 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:19.878930 23918 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:19.889667 23667 catalog_manager.cc:5526] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 reported cstate change: config changed from index 2 to 3, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.084640 23984 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.092502 24034 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } attrs { replace: true } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.098841 23843 consensus_queue.cc:237] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } }
I20250114 20:58:20.104112 24033 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:20.104650 23768 raft_consensus.cc:1270] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Refusing update from remote peer c8c4066502a34b6d81d38fa171dc8224: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:20.106025 24068 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:20.106803 23983 consensus_queue.cc:1035] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:20.114106 23768 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.116626 24033 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.124514 23666 catalog_manager.cc:5526] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 reported cstate change: config changed from index 3 to 4, VOTER 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131) evicted. New cstate: current_term: 1 leader_uuid: "c8c4066502a34b6d81d38fa171dc8224" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.276718 23906 tablet_service.cc:1514] Processing DeleteTablet for tablet d951ccf53e8e4e97ba4d353fc8a6e577 with delete_type TABLET_DATA_TOMBSTONED (TS 35bf8c4277f64c7aad4248f75e13ced6 not found in new config with opid_index 4) from {username='slave'} at 127.0.0.1:52658
I20250114 20:58:20.278398 24095 tablet_replica.cc:331] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: stopping tablet replica
I20250114 20:58:20.279268 24095 raft_consensus.cc:2238] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:20.280138 24095 raft_consensus.cc:2267] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:20.283138 24095 ts_tablet_manager.cc:1905] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:58:20.293998 24095 ts_tablet_manager.cc:1918] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.3
I20250114 20:58:20.294274 24095 log.cc:1198] T d951ccf53e8e4e97ba4d353fc8a6e577 P 35bf8c4277f64c7aad4248f75e13ced6: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/d951ccf53e8e4e97ba4d353fc8a6e577
I20250114 20:58:20.295405 23652 catalog_manager.cc:4872] TS 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131:44951): tablet d951ccf53e8e4e97ba4d353fc8a6e577 (table test-workload [id=90f4338c9b884eb09e81c814b7efa307]) successfully deleted
I20250114 20:58:20.940084 23984 raft_consensus.cc:2949] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 35bf8c4277f64c7aad4248f75e13ced6 (127.19.228.131) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:20.945848 23653 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet d951ccf53e8e4e97ba4d353fc8a6e577 with cas_config_opid_index 3: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:21.015826 23720 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:21.018263 23720 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:21.018604 23720 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:21.968416 23768 consensus_queue.cc:237] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } }
I20250114 20:58:21.975389 23918 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:58:21.976933 23843 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:58:21.977447 24099 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:21.978299 24100 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:58:21.985667 23732 consensus_peers.cc:487] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 -> Peer 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132:39323): Couldn't send request to peer 53a9d98d64884fd48f7a32d552a06c38. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 9848d7b9c87a4afc8f1310c6874f0ceb. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:21.988024 23918 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:21.988623 23843 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:21.999384 23667 catalog_manager.cc:5526] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 reported cstate change: config changed from index -1 to 3, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New cstate: current_term: 2 leader_uuid: "e67826202a504b209158ff0057d8fb38" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:22.019747 24099 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 LEADER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } } }
I20250114 20:58:22.335305 24111 ts_tablet_manager.cc:927] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Initiating tablet copy from peer e67826202a504b209158ff0057d8fb38 (127.19.228.129:41173)
I20250114 20:58:22.336941 24111 tablet_copy_client.cc:323] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.129:41173
I20250114 20:58:22.346093 23779 tablet_copy_service.cc:140] P e67826202a504b209158ff0057d8fb38: Received BeginTabletCopySession request for tablet 9848d7b9c87a4afc8f1310c6874f0ceb from peer 53a9d98d64884fd48f7a32d552a06c38 ({username='slave'} at 127.0.0.1:53628)
I20250114 20:58:22.346536 23779 tablet_copy_service.cc:161] P e67826202a504b209158ff0057d8fb38: Beginning new tablet copy session on tablet 9848d7b9c87a4afc8f1310c6874f0ceb from peer 53a9d98d64884fd48f7a32d552a06c38 at {username='slave'} at 127.0.0.1:53628: session id = 53a9d98d64884fd48f7a32d552a06c38-9848d7b9c87a4afc8f1310c6874f0ceb
I20250114 20:58:22.351296 23779 tablet_copy_source_session.cc:215] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:22.353873 24111 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9848d7b9c87a4afc8f1310c6874f0ceb. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:22.365399 24111 tablet_copy_client.cc:806] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:22.365984 24111 tablet_copy_client.cc:670] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:22.369027 24111 tablet_copy_client.cc:538] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:22.375058 24111 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap starting.
I20250114 20:58:22.391811 24111 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap replayed 1/1 log segments. Stats: ops{read=3 overwritten=0 applied=3 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:58:22.392594 24111 tablet_bootstrap.cc:492] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Bootstrap complete.
I20250114 20:58:22.393209 24111 ts_tablet_manager.cc:1397] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Time spent bootstrapping tablet: real 0.018s	user 0.016s	sys 0.004s
I20250114 20:58:22.395473 24111 raft_consensus.cc:357] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } }
I20250114 20:58:22.396142 24111 raft_consensus.cc:738] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 53a9d98d64884fd48f7a32d552a06c38, State: Initialized, Role: LEARNER
I20250114 20:58:22.396726 24111 consensus_queue.cc:260] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 3, Last appended: 2.3, Last appended by leader: 3, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: true } }
I20250114 20:58:22.399403 24111 ts_tablet_manager.cc:1428] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.001s
I20250114 20:58:22.400875 23779 tablet_copy_service.cc:342] P e67826202a504b209158ff0057d8fb38: Request end of tablet copy session 53a9d98d64884fd48f7a32d552a06c38-9848d7b9c87a4afc8f1310c6874f0ceb received from {username='slave'} at 127.0.0.1:53628
I20250114 20:58:22.401262 23779 tablet_copy_service.cc:434] P e67826202a504b209158ff0057d8fb38: ending tablet copy session 53a9d98d64884fd48f7a32d552a06c38-9848d7b9c87a4afc8f1310c6874f0ceb on tablet 9848d7b9c87a4afc8f1310c6874f0ceb with peer 53a9d98d64884fd48f7a32d552a06c38
I20250114 20:58:22.626051 24033 raft_consensus.cc:1212] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.2->[2.3-2.3]   Dedup: 2.3->[]
I20250114 20:58:23.019346 23720 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:23.022217 23720 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:23.022656 23720 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:23.133170 24099 raft_consensus.cc:1059] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: attempting to promote NON_VOTER 53a9d98d64884fd48f7a32d552a06c38 to VOTER
I20250114 20:58:23.135416 24099 consensus_queue.cc:237] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 2.3, Last appended by leader: 1, Current term: 2, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } }
I20250114 20:58:23.142032 24033 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 LEARNER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:58:23.142189 23918 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:58:23.143258 23843 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:58:23.143692 24117 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:23.144526 24099 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:23.145459 24108 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:23.155495 24033 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.157666 23918 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.169605 23666 catalog_manager.cc:5526] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 reported cstate change: config changed from index 3 to 4, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "e67826202a504b209158ff0057d8fb38" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.340168 24108 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 LEADER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.348153 23843 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, 53a9d98d64884fd48f7a32d552a06c38 (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "c8c4066502a34b6d81d38fa171dc8224" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 44507 } attrs { replace: true } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.478785 23768 consensus_queue.cc:237] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 2.4, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } }
I20250114 20:58:23.483644 24033 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 4. Preceding OpId from leader: term: 2 index: 5. (index mismatch)
I20250114 20:58:23.484304 23918 raft_consensus.cc:1270] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Refusing update from remote peer e67826202a504b209158ff0057d8fb38: Log matching property violated. Preceding OpId in replica: term: 2 index: 4. Preceding OpId from leader: term: 2 index: 5. (index mismatch)
I20250114 20:58:23.485028 24108 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:58:23.485750 24117 consensus_queue.cc:1035] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:58:23.494227 24033 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 FOLLOWER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.494731 23918 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.505918 23664 catalog_manager.cc:5526] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 reported cstate change: config changed from index 4 to 5, VOTER c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) evicted. New cstate: current_term: 2 leader_uuid: "e67826202a504b209158ff0057d8fb38" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.616561 24099 raft_consensus.cc:2949] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 LEADER]: Committing config change with OpId 2.5: config changed from index 4 to 5, VOTER c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130) evicted. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "e67826202a504b209158ff0057d8fb38" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 41173 } } peers { permanent_uuid: "35bf8c4277f64c7aad4248f75e13ced6" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 44951 } } peers { permanent_uuid: "53a9d98d64884fd48f7a32d552a06c38" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 39323 } attrs { promote: false } } }
I20250114 20:58:23.623306 23653 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 9848d7b9c87a4afc8f1310c6874f0ceb with cas_config_opid_index 4: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:23.763870 23832 tablet_service.cc:1514] Processing DeleteTablet for tablet 9848d7b9c87a4afc8f1310c6874f0ceb with delete_type TABLET_DATA_TOMBSTONED (TS c8c4066502a34b6d81d38fa171dc8224 not found in new config with opid_index 5) from {username='slave'} at 127.0.0.1:38476
I20250114 20:58:23.765803 24125 tablet_replica.cc:331] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: stopping tablet replica
I20250114 20:58:23.766408 24125 raft_consensus.cc:2238] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:23.766952 24125 raft_consensus.cc:2267] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:23.769052 24125 ts_tablet_manager.cc:1905] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:58:23.778834 24125 ts_tablet_manager.cc:1918] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.4
I20250114 20:58:23.779117 24125 log.cc:1198] T 9848d7b9c87a4afc8f1310c6874f0ceb P c8c4066502a34b6d81d38fa171dc8224: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.AutoRebalancingUnstableCluster.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/9848d7b9c87a4afc8f1310c6874f0ceb
I20250114 20:58:23.780272 23653 catalog_manager.cc:4872] TS c8c4066502a34b6d81d38fa171dc8224 (127.19.228.130:44507): tablet 9848d7b9c87a4afc8f1310c6874f0ceb (table test-workload [id=90f4338c9b884eb09e81c814b7efa307]) successfully deleted
I20250114 20:58:25.024067 23720 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:25.027410 23720 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:25.027822 23720 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:27.028661 23720 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:27.031381 23720 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:27.031719 23720 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:27.917254 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:27.938200 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:27.939090 20370 tablet_replica.cc:331] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38: stopping tablet replica
I20250114 20:58:27.939761 20370 raft_consensus.cc:2238] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:27.940737 20370 raft_consensus.cc:2267] T d951ccf53e8e4e97ba4d353fc8a6e577 P e67826202a504b209158ff0057d8fb38 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:27.942888 20370 tablet_replica.cc:331] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38: stopping tablet replica
I20250114 20:58:27.943329 20370 raft_consensus.cc:2238] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:27.943750 20370 raft_consensus.cc:2267] T f021c853c1ae4e989f8195d92a59bd8b P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:27.945649 20370 tablet_replica.cc:331] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38: stopping tablet replica
I20250114 20:58:27.946060 20370 raft_consensus.cc:2238] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:58:27.946870 20370 raft_consensus.cc:2267] T 9848d7b9c87a4afc8f1310c6874f0ceb P e67826202a504b209158ff0057d8fb38 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:27.969798 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:27.984344 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:58:28.003989 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:28.004598 20370 tablet_replica.cc:331] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224: stopping tablet replica
I20250114 20:58:28.005146 20370 raft_consensus.cc:2238] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:28.005617 20370 raft_consensus.cc:2267] T f021c853c1ae4e989f8195d92a59bd8b P c8c4066502a34b6d81d38fa171dc8224 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.007575 20370 tablet_replica.cc:331] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224: stopping tablet replica
I20250114 20:58:28.007998 20370 raft_consensus.cc:2238] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:28.008816 20370 raft_consensus.cc:2267] T d951ccf53e8e4e97ba4d353fc8a6e577 P c8c4066502a34b6d81d38fa171dc8224 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.031327 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:28.044171 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:28.062057 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:28.062635 20370 tablet_replica.cc:331] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6: stopping tablet replica
I20250114 20:58:28.063158 20370 raft_consensus.cc:2238] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:58:28.064021 20370 raft_consensus.cc:2267] T f021c853c1ae4e989f8195d92a59bd8b P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.065951 20370 tablet_replica.cc:331] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6: stopping tablet replica
I20250114 20:58:28.066401 20370 raft_consensus.cc:2238] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:28.067310 20370 raft_consensus.cc:2267] T 9848d7b9c87a4afc8f1310c6874f0ceb P 35bf8c4277f64c7aad4248f75e13ced6 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.088304 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:28.100237 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:58:28.115751 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:28.116417 20370 tablet_replica.cc:331] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38: stopping tablet replica
I20250114 20:58:28.116986 20370 raft_consensus.cc:2238] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:28.117617 20370 raft_consensus.cc:2267] T 9848d7b9c87a4afc8f1310c6874f0ceb P 53a9d98d64884fd48f7a32d552a06c38 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.119472 20370 tablet_replica.cc:331] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38: stopping tablet replica
I20250114 20:58:28.119936 20370 raft_consensus.cc:2238] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:28.120582 20370 raft_consensus.cc:2267] T d951ccf53e8e4e97ba4d353fc8a6e577 P 53a9d98d64884fd48f7a32d552a06c38 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.139523 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:58:28.149924 20370 master.cc:537] Master@127.19.228.190:37239 shutting down...
I20250114 20:58:28.164366 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:28.164894 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:28.165176 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P c5f8655384374b88ae9f5b1f2cd337e7: stopping tablet replica
I20250114 20:58:28.182482 20370 master.cc:559] Master@127.19.228.190:37239 shutdown complete.
[       OK ] AutoRebalancerTest.AutoRebalancingUnstableCluster (11372 ms)
[ RUN      ] AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor
I20250114 20:58:28.215178 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:40897
I20250114 20:58:28.216230 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:28.221633 24128 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.222126 24129 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.222756 24131 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.223243 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:28.223942 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:28.224134 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:28.224249 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888308224239 us; error 0 us; skew 500 ppm
I20250114 20:58:28.224665 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:28.226917 20370 webserver.cc:458] Webserver started at http://127.19.228.190:40199/ using document root <none> and password file <none>
I20250114 20:58:28.227309 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:28.227453 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:28.227689 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:28.228844 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "900d8335f8a14655bb66bd7ae3ed2b2c"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.233341 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.001s
I20250114 20:58:28.236266 24136 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.236951 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:58:28.237193 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "900d8335f8a14655bb66bd7ae3ed2b2c"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.237427 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:28.259829 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:28.260890 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:28.294898 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:40897
I20250114 20:58:28.294977 24187 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:40897 every 8 connection(s)
I20250114 20:58:28.298555 24188 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:28.308326 24188 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c: Bootstrap starting.
I20250114 20:58:28.312458 24188 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:28.316080 24188 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c: No bootstrap required, opened a new log
I20250114 20:58:28.317902 24188 raft_consensus.cc:357] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER }
I20250114 20:58:28.318331 24188 raft_consensus.cc:383] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:28.318548 24188 raft_consensus.cc:738] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 900d8335f8a14655bb66bd7ae3ed2b2c, State: Initialized, Role: FOLLOWER
I20250114 20:58:28.319048 24188 consensus_queue.cc:260] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER }
I20250114 20:58:28.319460 24188 raft_consensus.cc:397] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:58:28.319669 24188 raft_consensus.cc:491] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:58:28.319926 24188 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:28.323765 24188 raft_consensus.cc:513] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER }
I20250114 20:58:28.324203 24188 leader_election.cc:304] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 900d8335f8a14655bb66bd7ae3ed2b2c; no voters: 
I20250114 20:58:28.325168 24188 leader_election.cc:290] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:58:28.325505 24191 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:28.326675 24191 raft_consensus.cc:695] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 1 LEADER]: Becoming Leader. State: Replica: 900d8335f8a14655bb66bd7ae3ed2b2c, State: Running, Role: LEADER
I20250114 20:58:28.327306 24191 consensus_queue.cc:237] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER }
I20250114 20:58:28.327804 24188 sys_catalog.cc:564] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:58:28.330061 24192 sys_catalog.cc:455] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER } }
I20250114 20:58:28.330209 24193 sys_catalog.cc:455] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 900d8335f8a14655bb66bd7ae3ed2b2c. Latest consensus state: current_term: 1 leader_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "900d8335f8a14655bb66bd7ae3ed2b2c" member_type: VOTER } }
I20250114 20:58:28.330698 24192 sys_catalog.cc:458] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:28.330757 24193 sys_catalog.cc:458] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:28.338892 24197 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:58:28.345693 24197 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:58:28.346649 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:58:28.353453 24197 catalog_manager.cc:1348] Generated new cluster ID: 2257e7830de5465f91ce6a7eff32c540
I20250114 20:58:28.353719 24197 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:58:28.374730 24197 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:58:28.375952 24197 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:58:28.398378 24197 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c: Generated new TSK 0
I20250114 20:58:28.398929 24197 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:58:28.412494 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:28.418176 24209 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.418922 24210 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.419981 24212 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.420655 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:28.421329 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:28.421506 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:28.421644 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888308421629 us; error 0 us; skew 500 ppm
I20250114 20:58:28.422098 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:28.424217 20370 webserver.cc:458] Webserver started at http://127.19.228.129:42559/ using document root <none> and password file <none>
I20250114 20:58:28.424649 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:28.424808 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:28.425041 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:28.426019 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "30304c22e641474faea863959af04a20"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.430280 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.004s
I20250114 20:58:28.433156 24217 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.433804 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:58:28.434052 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "30304c22e641474faea863959af04a20"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.434295 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:28.458297 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:28.459345 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:28.460700 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:28.462759 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:28.462929 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.463155 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:28.463290 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.499122 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:42747
I20250114 20:58:28.499203 24279 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:42747 every 8 connection(s)
I20250114 20:58:28.503278 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:28.510830 24284 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.511353 24285 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.512991 20370 server_base.cc:1034] running on GCE node
W20250114 20:58:28.513602 24287 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.514487 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:28.514739 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:28.514930 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888308514912 us; error 0 us; skew 500 ppm
I20250114 20:58:28.515486 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:28.515976 24280 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40897
I20250114 20:58:28.516278 24280 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:28.516913 24280 heartbeater.cc:510] Master 127.19.228.190:40897 requested a full tablet report, sending...
I20250114 20:58:28.518136 20370 webserver.cc:458] Webserver started at http://127.19.228.130:40137/ using document root <none> and password file <none>
I20250114 20:58:28.518570 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:28.518601 24153 ts_manager.cc:194] Registered new tserver with Master: 30304c22e641474faea863959af04a20 (127.19.228.129:42747)
I20250114 20:58:28.518821 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:28.519147 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:28.520258 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "a6f61889cc094a6b88e8b64cb2136cd2"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.520370 24153 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:38846
I20250114 20:58:28.524489 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.001s
I20250114 20:58:28.527441 24292 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.528121 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:28.528374 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "a6f61889cc094a6b88e8b64cb2136cd2"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.528650 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:28.540724 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:28.541725 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:28.542989 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:28.545022 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:28.545198 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.545393 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:28.545531 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.580521 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:41449
I20250114 20:58:28.580603 24354 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:41449 every 8 connection(s)
I20250114 20:58:28.584789 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:28.591617 24358 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:28.592417 24359 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.595229 24355 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40897
W20250114 20:58:28.595371 24361 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:28.595568 24355 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:28.596089 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:28.596495 24355 heartbeater.cc:510] Master 127.19.228.190:40897 requested a full tablet report, sending...
I20250114 20:58:28.596858 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:28.597079 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:28.597263 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888308597245 us; error 0 us; skew 500 ppm
I20250114 20:58:28.597860 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:28.598346 24153 ts_manager.cc:194] Registered new tserver with Master: a6f61889cc094a6b88e8b64cb2136cd2 (127.19.228.130:41449)
I20250114 20:58:28.600100 24153 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:38858
I20250114 20:58:28.600427 20370 webserver.cc:458] Webserver started at http://127.19.228.131:35467/ using document root <none> and password file <none>
I20250114 20:58:28.600984 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:28.601163 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:28.601459 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:28.602742 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "25a5b878f3174b429b6e8b3b9424bbe9"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.606686 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:58:28.609609 24366 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.610276 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:28.610520 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "25a5b878f3174b429b6e8b3b9424bbe9"
format_stamp: "Formatted at 2025-01-14 20:58:28 on dist-test-slave-kc3q"
I20250114 20:58:28.610759 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:28.620155 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:28.621097 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:28.622324 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:28.624397 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:28.624595 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.624807 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:28.624948 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:28.660085 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:37499
I20250114 20:58:28.660176 24428 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:37499 every 8 connection(s)
I20250114 20:58:28.671823 24429 heartbeater.cc:346] Connected to a master server at 127.19.228.190:40897
I20250114 20:58:28.672152 24429 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:28.672773 24429 heartbeater.cc:510] Master 127.19.228.190:40897 requested a full tablet report, sending...
I20250114 20:58:28.674372 24153 ts_manager.cc:194] Registered new tserver with Master: 25a5b878f3174b429b6e8b3b9424bbe9 (127.19.228.131:37499)
I20250114 20:58:28.674960 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012037234s
I20250114 20:58:28.675921 24153 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:38862
I20250114 20:58:29.522727 24280 heartbeater.cc:502] Master 127.19.228.190:40897 was elected leader, sending a full tablet report...
I20250114 20:58:29.602710 24355 heartbeater.cc:502] Master 127.19.228.190:40897 was elected leader, sending a full tablet report...
I20250114 20:58:29.678421 24429 heartbeater.cc:502] Master 127.19.228.190:40897 was elected leader, sending a full tablet report...
I20250114 20:58:29.706857 20370 test_util.cc:274] Using random seed: -759762859
I20250114 20:58:29.727151 24153 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:38876:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:58:29.729077 24153 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:58:29.765938 24394 tablet_service.cc:1467] Processing CreateTablet for tablet c18817afb7f94d59ac95281442a6d322 (DEFAULT_TABLE table=test-workload [id=97ee3a71d1e84bce8212184125069ed3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:29.766471 24245 tablet_service.cc:1467] Processing CreateTablet for tablet c18817afb7f94d59ac95281442a6d322 (DEFAULT_TABLE table=test-workload [id=97ee3a71d1e84bce8212184125069ed3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:29.767244 24394 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c18817afb7f94d59ac95281442a6d322. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:29.767661 24245 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c18817afb7f94d59ac95281442a6d322. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:29.771379 24320 tablet_service.cc:1467] Processing CreateTablet for tablet c18817afb7f94d59ac95281442a6d322 (DEFAULT_TABLE table=test-workload [id=97ee3a71d1e84bce8212184125069ed3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250114 20:58:29.772521 24320 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c18817afb7f94d59ac95281442a6d322. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:29.782445 24449 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: Bootstrap starting.
I20250114 20:58:29.786942 24449 tablet_bootstrap.cc:654] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:29.788214 24451 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: Bootstrap starting.
I20250114 20:58:29.788844 24450 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: Bootstrap starting.
I20250114 20:58:29.792946 24449 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: No bootstrap required, opened a new log
I20250114 20:58:29.793430 24449 ts_tablet_manager.cc:1397] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: Time spent bootstrapping tablet: real 0.011s	user 0.004s	sys 0.004s
I20250114 20:58:29.794275 24451 tablet_bootstrap.cc:654] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:29.794826 24450 tablet_bootstrap.cc:654] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:29.796094 24449 raft_consensus.cc:357] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.796859 24449 raft_consensus.cc:383] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:29.797220 24449 raft_consensus.cc:738] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 25a5b878f3174b429b6e8b3b9424bbe9, State: Initialized, Role: FOLLOWER
I20250114 20:58:29.797984 24449 consensus_queue.cc:260] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.799865 24450 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: No bootstrap required, opened a new log
I20250114 20:58:29.800379 24450 ts_tablet_manager.cc:1397] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: Time spent bootstrapping tablet: real 0.012s	user 0.005s	sys 0.004s
I20250114 20:58:29.799842 24451 tablet_bootstrap.cc:492] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: No bootstrap required, opened a new log
I20250114 20:58:29.801106 24451 ts_tablet_manager.cc:1397] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: Time spent bootstrapping tablet: real 0.013s	user 0.008s	sys 0.003s
I20250114 20:58:29.801681 24449 ts_tablet_manager.cc:1428] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: Time spent starting tablet: real 0.008s	user 0.009s	sys 0.000s
I20250114 20:58:29.802743 24450 raft_consensus.cc:357] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.803474 24450 raft_consensus.cc:383] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:29.803395 24451 raft_consensus.cc:357] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.803881 24450 raft_consensus.cc:738] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 30304c22e641474faea863959af04a20, State: Initialized, Role: FOLLOWER
I20250114 20:58:29.804103 24451 raft_consensus.cc:383] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:29.804435 24451 raft_consensus.cc:738] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a6f61889cc094a6b88e8b64cb2136cd2, State: Initialized, Role: FOLLOWER
I20250114 20:58:29.804623 24450 consensus_queue.cc:260] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.805043 24451 consensus_queue.cc:260] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.808506 24451 ts_tablet_manager.cc:1428] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: Time spent starting tablet: real 0.007s	user 0.005s	sys 0.000s
I20250114 20:58:29.811404 24450 ts_tablet_manager.cc:1428] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: Time spent starting tablet: real 0.011s	user 0.005s	sys 0.004s
W20250114 20:58:29.834936 24356 tablet.cc:2367] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250114 20:58:29.927728 24455 raft_consensus.cc:491] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:29.928184 24455 raft_consensus.cc:513] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.930013 24455 leader_election.cc:290] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 30304c22e641474faea863959af04a20 (127.19.228.129:42747), a6f61889cc094a6b88e8b64cb2136cd2 (127.19.228.130:41449)
I20250114 20:58:29.939410 24330 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "c18817afb7f94d59ac95281442a6d322" candidate_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" is_pre_election: true
I20250114 20:58:29.939350 24255 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "c18817afb7f94d59ac95281442a6d322" candidate_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "30304c22e641474faea863959af04a20" is_pre_election: true
I20250114 20:58:29.940177 24330 raft_consensus.cc:2463] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 25a5b878f3174b429b6e8b3b9424bbe9 in term 0.
I20250114 20:58:29.940277 24255 raft_consensus.cc:2463] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 25a5b878f3174b429b6e8b3b9424bbe9 in term 0.
I20250114 20:58:29.941149 24369 leader_election.cc:304] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 25a5b878f3174b429b6e8b3b9424bbe9, a6f61889cc094a6b88e8b64cb2136cd2; no voters: 
I20250114 20:58:29.941774 24455 raft_consensus.cc:2798] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:29.942096 24455 raft_consensus.cc:491] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:29.942337 24455 raft_consensus.cc:3054] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:29.946774 24455 raft_consensus.cc:513] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.948137 24455 leader_election.cc:290] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [CANDIDATE]: Term 1 election: Requested vote from peers 30304c22e641474faea863959af04a20 (127.19.228.129:42747), a6f61889cc094a6b88e8b64cb2136cd2 (127.19.228.130:41449)
I20250114 20:58:29.948845 24255 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "c18817afb7f94d59ac95281442a6d322" candidate_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "30304c22e641474faea863959af04a20"
I20250114 20:58:29.949008 24330 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "c18817afb7f94d59ac95281442a6d322" candidate_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a6f61889cc094a6b88e8b64cb2136cd2"
I20250114 20:58:29.949297 24255 raft_consensus.cc:3054] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:29.949498 24330 raft_consensus.cc:3054] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:29.953863 24255 raft_consensus.cc:2463] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 25a5b878f3174b429b6e8b3b9424bbe9 in term 1.
I20250114 20:58:29.954030 24330 raft_consensus.cc:2463] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 25a5b878f3174b429b6e8b3b9424bbe9 in term 1.
I20250114 20:58:29.954826 24367 leader_election.cc:304] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 25a5b878f3174b429b6e8b3b9424bbe9, 30304c22e641474faea863959af04a20; no voters: 
I20250114 20:58:29.955494 24455 raft_consensus.cc:2798] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:29.956624 24455 raft_consensus.cc:695] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 1 LEADER]: Becoming Leader. State: Replica: 25a5b878f3174b429b6e8b3b9424bbe9, State: Running, Role: LEADER
I20250114 20:58:29.957408 24455 consensus_queue.cc:237] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } }
I20250114 20:58:29.963811 24152 catalog_manager.cc:5526] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 reported cstate change: term changed from 0 to 1, leader changed from <none> to 25a5b878f3174b429b6e8b3b9424bbe9 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "25a5b878f3174b429b6e8b3b9424bbe9" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37499 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "30304c22e641474faea863959af04a20" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 42747 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:30.004766 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:30.019198 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:30.019918 20370 tablet_replica.cc:331] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20: stopping tablet replica
I20250114 20:58:30.020390 20370 raft_consensus.cc:2238] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:30.020743 20370 raft_consensus.cc:2267] T c18817afb7f94d59ac95281442a6d322 P 30304c22e641474faea863959af04a20 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:30.037914 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:30.341689 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:30.342855 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:30.343153 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:30.467414 24462 consensus_queue.cc:1035] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a6f61889cc094a6b88e8b64cb2136cd2" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 41449 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250114 20:58:30.480389 24367 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111) [suppressed 8 similar messages]
W20250114 20:58:30.483933 24367 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer 30304c22e641474faea863959af04a20 (127.19.228.129:42747): Couldn't send request to peer 30304c22e641474faea863959af04a20. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:32.344008 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:32.345324 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:32.345607 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:58:32.912530 24367 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer 30304c22e641474faea863959af04a20 (127.19.228.129:42747): Couldn't send request to peer 30304c22e641474faea863959af04a20. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250114 20:58:34.346457 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:34.347728 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:34.348001 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:58:35.461277 24367 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer 30304c22e641474faea863959af04a20 (127.19.228.129:42747): Couldn't send request to peer 30304c22e641474faea863959af04a20. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
W20250114 20:58:35.995684 24367 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111) [suppressed 10 similar messages]
I20250114 20:58:36.348982 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:36.350258 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:36.350551 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:58:37.988579 24367 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer 30304c22e641474faea863959af04a20 (127.19.228.129:42747): Couldn't send request to peer 30304c22e641474faea863959af04a20. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111). This is attempt 16: this message will repeat every 5th retry.
I20250114 20:58:38.351372 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:38.352596 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:38.352878 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
I20250114 20:58:40.353709 24206 auto_leader_rebalancer.cc:140] leader rebalance for table test-workload
I20250114 20:58:40.354894 24206 auto_leader_rebalancer.cc:388] table: test-workload, leader rebalance finish, leader transfer count: 0
I20250114 20:58:40.355170 24206 auto_leader_rebalancer.cc:450] All tables' leader rebalancing finished this round
W20250114 20:58:40.596274 24367 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer 30304c22e641474faea863959af04a20 (127.19.228.129:42747): Couldn't send request to peer 30304c22e641474faea863959af04a20. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:42747: connect: Connection refused (error 111). This is attempt 21: this message will repeat every 5th retry.
I20250114 20:58:41.105577 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
W20250114 20:58:41.118685 24369 proxy.cc:239] Call had error, refreshing address and retrying: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer [suppressed 9 similar messages]
W20250114 20:58:41.122195 24369 consensus_peers.cc:487] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 -> Peer a6f61889cc094a6b88e8b64cb2136cd2 (127.19.228.130:41449): Couldn't send request to peer a6f61889cc094a6b88e8b64cb2136cd2. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:41.131754 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:41.132418 20370 tablet_replica.cc:331] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2: stopping tablet replica
I20250114 20:58:41.132918 20370 raft_consensus.cc:2238] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:41.133517 20370 raft_consensus.cc:2267] T c18817afb7f94d59ac95281442a6d322 P a6f61889cc094a6b88e8b64cb2136cd2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:41.152782 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:41.163941 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:41.185739 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:41.186359 20370 tablet_replica.cc:331] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9: stopping tablet replica
I20250114 20:58:41.186894 20370 raft_consensus.cc:2238] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:41.187798 20370 raft_consensus.cc:2267] T c18817afb7f94d59ac95281442a6d322 P 25a5b878f3174b429b6e8b3b9424bbe9 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:41.205976 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:41.216143 20370 master.cc:537] Master@127.19.228.190:40897 shutting down...
I20250114 20:58:41.232649 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:41.233211 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:41.233491 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 900d8335f8a14655bb66bd7ae3ed2b2c: stopping tablet replica
I20250114 20:58:41.250885 20370 master.cc:559] Master@127.19.228.190:40897 shutdown complete.
[       OK ] AutoRebalancerTest.NoReplicaMovesIfCannotMeetReplicationFactor (13066 ms)
[ RUN      ] AutoRebalancerTest.NoRebalancingIfReplicasRecovering
I20250114 20:58:41.281876 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:41913
I20250114 20:58:41.282905 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:41.288448 24498 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.289176 24499 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.289788 24501 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:41.290293 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:41.290983 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:41.291181 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:41.291319 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888321291302 us; error 0 us; skew 500 ppm
I20250114 20:58:41.291777 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:41.299257 20370 webserver.cc:458] Webserver started at http://127.19.228.190:33079/ using document root <none> and password file <none>
I20250114 20:58:41.299736 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:41.299907 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:41.300150 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:41.301360 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "24eebd20cc4a4fa08e4acc9da426bf70"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.305514 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.002s
I20250114 20:58:41.308454 24506 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.309074 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:41.309314 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "24eebd20cc4a4fa08e4acc9da426bf70"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.309548 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:41.327680 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:41.328908 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:41.365626 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:41913
I20250114 20:58:41.365708 24557 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:41913 every 8 connection(s)
I20250114 20:58:41.369218 24558 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:41.379787 24558 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70: Bootstrap starting.
I20250114 20:58:41.384032 24558 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:41.387923 24558 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70: No bootstrap required, opened a new log
I20250114 20:58:41.389870 24558 raft_consensus.cc:357] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER }
I20250114 20:58:41.390327 24558 raft_consensus.cc:383] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:41.390534 24558 raft_consensus.cc:738] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 24eebd20cc4a4fa08e4acc9da426bf70, State: Initialized, Role: FOLLOWER
I20250114 20:58:41.391098 24558 consensus_queue.cc:260] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER }
I20250114 20:58:41.391526 24558 raft_consensus.cc:397] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:58:41.391772 24558 raft_consensus.cc:491] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:58:41.392017 24558 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:41.396252 24558 raft_consensus.cc:513] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER }
I20250114 20:58:41.396711 24558 leader_election.cc:304] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 24eebd20cc4a4fa08e4acc9da426bf70; no voters: 
I20250114 20:58:41.397711 24558 leader_election.cc:290] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:58:41.398079 24561 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:41.399250 24561 raft_consensus.cc:695] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 1 LEADER]: Becoming Leader. State: Replica: 24eebd20cc4a4fa08e4acc9da426bf70, State: Running, Role: LEADER
I20250114 20:58:41.399902 24561 consensus_queue.cc:237] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER }
I20250114 20:58:41.400435 24558 sys_catalog.cc:564] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:58:41.402581 24563 sys_catalog.cc:455] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 24eebd20cc4a4fa08e4acc9da426bf70. Latest consensus state: current_term: 1 leader_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER } }
I20250114 20:58:41.402635 24562 sys_catalog.cc:455] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "24eebd20cc4a4fa08e4acc9da426bf70" member_type: VOTER } }
I20250114 20:58:41.403293 24563 sys_catalog.cc:458] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:41.403451 24562 sys_catalog.cc:458] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:41.405673 24566 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:58:41.410363 24566 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:58:41.414199 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:58:41.418617 24566 catalog_manager.cc:1348] Generated new cluster ID: 4c692b4842404883b1acabbcf26a6bd7
I20250114 20:58:41.418810 24566 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:58:41.439647 24566 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:58:41.440846 24566 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:58:41.453233 24566 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70: Generated new TSK 0
I20250114 20:58:41.453720 24566 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:58:41.480127 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:41.485844 24579 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.486794 24580 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.488063 24582 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:41.489091 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:41.489820 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:41.489992 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:41.490116 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888321490100 us; error 0 us; skew 500 ppm
I20250114 20:58:41.490536 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:41.492671 20370 webserver.cc:458] Webserver started at http://127.19.228.129:43123/ using document root <none> and password file <none>
I20250114 20:58:41.493063 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:41.493214 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:41.493410 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:41.494426 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "52fd7c3fdc4f44a78590c14cc0b18814"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.498540 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.003s
I20250114 20:58:41.501641 24587 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.502368 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.002s
I20250114 20:58:41.502607 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "52fd7c3fdc4f44a78590c14cc0b18814"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.502857 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:41.512681 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:41.513716 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:41.514998 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:41.517118 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:41.517305 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.517513 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:41.517652 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.554111 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:40759
I20250114 20:58:41.554198 24649 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:40759 every 8 connection(s)
I20250114 20:58:41.558358 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:41.565865 24654 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.566488 24655 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.568763 24657 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:41.569994 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:41.570711 24650 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41913
I20250114 20:58:41.570842 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:58:41.571031 24650 heartbeater.cc:463] Registering TS with master...
W20250114 20:58:41.571089 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:41.571372 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888321571356 us; error 0 us; skew 500 ppm
I20250114 20:58:41.571825 24650 heartbeater.cc:510] Master 127.19.228.190:41913 requested a full tablet report, sending...
I20250114 20:58:41.571909 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:41.573912 24523 ts_manager.cc:194] Registered new tserver with Master: 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759)
I20250114 20:58:41.574630 20370 webserver.cc:458] Webserver started at http://127.19.228.130:43487/ using document root <none> and password file <none>
I20250114 20:58:41.575218 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:41.575441 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:41.575758 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:41.576344 24523 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:35092
I20250114 20:58:41.577237 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "2a4adc6752064f6780cf3140807a7720"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.581719 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:58:41.584745 24662 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.585402 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.000s	sys 0.003s
I20250114 20:58:41.585646 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "2a4adc6752064f6780cf3140807a7720"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.585908 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:41.601843 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:41.602896 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:41.604259 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:41.606411 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:41.606587 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.606806 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:41.606945 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.642638 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:36615
I20250114 20:58:41.642729 24724 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:36615 every 8 connection(s)
I20250114 20:58:41.647087 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:41.653393 24728 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.654409 24729 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:41.656666 24731 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:41.657327 24725 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41913
I20250114 20:58:41.657375 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:41.657707 24725 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:41.658279 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
I20250114 20:58:41.658385 24725 heartbeater.cc:510] Master 127.19.228.190:41913 requested a full tablet report, sending...
W20250114 20:58:41.658497 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:41.658720 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888321658701 us; error 0 us; skew 500 ppm
I20250114 20:58:41.659238 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:41.660282 24523 ts_manager.cc:194] Registered new tserver with Master: 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615)
I20250114 20:58:41.661734 20370 webserver.cc:458] Webserver started at http://127.19.228.131:45383/ using document root <none> and password file <none>
I20250114 20:58:41.661789 24523 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:35100
I20250114 20:58:41.662218 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:41.662420 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:41.662757 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:41.663911 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "d8849e2fa8d14bb6852f6843d40d9c72"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.667851 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.003s	user 0.002s	sys 0.001s
I20250114 20:58:41.670682 24736 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.671348 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:41.671626 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "d8849e2fa8d14bb6852f6843d40d9c72"
format_stamp: "Formatted at 2025-01-14 20:58:41 on dist-test-slave-kc3q"
I20250114 20:58:41.671880 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:41.691361 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:41.692571 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:41.693835 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:41.695916 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:41.696105 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.696303 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:41.696447 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:41.732951 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:42017
I20250114 20:58:41.733037 24798 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:42017 every 8 connection(s)
I20250114 20:58:41.744908 24799 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41913
I20250114 20:58:41.745213 24799 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:41.745890 24799 heartbeater.cc:510] Master 127.19.228.190:41913 requested a full tablet report, sending...
I20250114 20:58:41.747491 24523 ts_manager.cc:194] Registered new tserver with Master: d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:41.747936 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012069716s
I20250114 20:58:41.748752 24523 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:35102
I20250114 20:58:42.579120 24650 heartbeater.cc:502] Master 127.19.228.190:41913 was elected leader, sending a full tablet report...
I20250114 20:58:42.664062 24725 heartbeater.cc:502] Master 127.19.228.190:41913 was elected leader, sending a full tablet report...
I20250114 20:58:42.751236 24799 heartbeater.cc:502] Master 127.19.228.190:41913 was elected leader, sending a full tablet report...
I20250114 20:58:42.779071 20370 test_util.cc:274] Using random seed: -746690644
I20250114 20:58:42.799634 24523 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:35112:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:58:42.801862 24523 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:58:42.850091 24690 tablet_service.cc:1467] Processing CreateTablet for tablet 10785666cd15467b839a3054a1f7d54f (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:42.851438 24690 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 10785666cd15467b839a3054a1f7d54f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.853319 24687 tablet_service.cc:1467] Processing CreateTablet for tablet aadaffa66a754aeabc6f18152e145030 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:42.854583 24687 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet aadaffa66a754aeabc6f18152e145030. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.856606 24615 tablet_service.cc:1467] Processing CreateTablet for tablet 10785666cd15467b839a3054a1f7d54f (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:42.857371 24612 tablet_service.cc:1467] Processing CreateTablet for tablet aadaffa66a754aeabc6f18152e145030 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:42.857842 24615 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 10785666cd15467b839a3054a1f7d54f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.859036 24613 tablet_service.cc:1467] Processing CreateTablet for tablet 432606a5f90743ceb236987a6ad43e33 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:42.859730 24612 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet aadaffa66a754aeabc6f18152e145030. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.860284 24613 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 432606a5f90743ceb236987a6ad43e33. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.861701 24614 tablet_service.cc:1467] Processing CreateTablet for tablet 0c43bb0ab0b1413a826516f7f1980835 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:42.862916 24614 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0c43bb0ab0b1413a826516f7f1980835. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.865015 24688 tablet_service.cc:1467] Processing CreateTablet for tablet 432606a5f90743ceb236987a6ad43e33 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:42.866232 24688 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 432606a5f90743ceb236987a6ad43e33. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.851081 24689 tablet_service.cc:1467] Processing CreateTablet for tablet 0c43bb0ab0b1413a826516f7f1980835 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:42.868796 24689 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0c43bb0ab0b1413a826516f7f1980835. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.874404 24819 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: Bootstrap starting.
I20250114 20:58:42.876960 24763 tablet_service.cc:1467] Processing CreateTablet for tablet 0c43bb0ab0b1413a826516f7f1980835 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:42.875366 24764 tablet_service.cc:1467] Processing CreateTablet for tablet 10785666cd15467b839a3054a1f7d54f (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:42.880668 24761 tablet_service.cc:1467] Processing CreateTablet for tablet aadaffa66a754aeabc6f18152e145030 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:42.882324 24819 tablet_bootstrap.cc:654] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.882635 24762 tablet_service.cc:1467] Processing CreateTablet for tablet 432606a5f90743ceb236987a6ad43e33 (DEFAULT_TABLE table=test-workload [id=9a970f902d6b4fe5a1845f21f9aaa366]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:42.884114 24763 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0c43bb0ab0b1413a826516f7f1980835. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.887475 24764 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 10785666cd15467b839a3054a1f7d54f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.891990 24761 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet aadaffa66a754aeabc6f18152e145030. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.895947 24762 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 432606a5f90743ceb236987a6ad43e33. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:42.921526 24822 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: Bootstrap starting.
I20250114 20:58:42.927480 24819 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: No bootstrap required, opened a new log
I20250114 20:58:42.928087 24819 ts_tablet_manager.cc:1397] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: Time spent bootstrapping tablet: real 0.054s	user 0.016s	sys 0.027s
I20250114 20:58:42.928637 24821 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: Bootstrap starting.
I20250114 20:58:42.930914 24819 raft_consensus.cc:357] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.931643 24822 tablet_bootstrap.cc:654] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.931671 24819 raft_consensus.cc:383] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.932145 24819 raft_consensus.cc:738] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.932863 24819 consensus_queue.cc:260] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.933449 24821 tablet_bootstrap.cc:654] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.935933 24819 ts_tablet_manager.cc:1428] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: Time spent starting tablet: real 0.008s	user 0.004s	sys 0.003s
I20250114 20:58:42.936898 24819 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: Bootstrap starting.
I20250114 20:58:42.939020 24821 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: No bootstrap required, opened a new log
I20250114 20:58:42.939446 24822 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: No bootstrap required, opened a new log
I20250114 20:58:42.939567 24821 ts_tablet_manager.cc:1397] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent bootstrapping tablet: real 0.011s	user 0.010s	sys 0.000s
I20250114 20:58:42.940022 24822 ts_tablet_manager.cc:1397] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent bootstrapping tablet: real 0.019s	user 0.007s	sys 0.006s
I20250114 20:58:42.941900 24821 raft_consensus.cc:357] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.942625 24821 raft_consensus.cc:383] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.942935 24821 raft_consensus.cc:738] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.943760 24821 consensus_queue.cc:260] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.945122 24819 tablet_bootstrap.cc:654] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.944906 24822 raft_consensus.cc:357] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.945653 24822 raft_consensus.cc:383] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.946544 24822 raft_consensus.cc:738] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 52fd7c3fdc4f44a78590c14cc0b18814, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.947227 24822 consensus_queue.cc:260] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.949065 24821 ts_tablet_manager.cc:1428] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent starting tablet: real 0.009s	user 0.004s	sys 0.004s
I20250114 20:58:42.950565 24821 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: Bootstrap starting.
I20250114 20:58:42.951658 24819 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: No bootstrap required, opened a new log
I20250114 20:58:42.952111 24819 ts_tablet_manager.cc:1397] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: Time spent bootstrapping tablet: real 0.015s	user 0.010s	sys 0.002s
I20250114 20:58:42.954551 24819 raft_consensus.cc:357] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.955042 24819 raft_consensus.cc:383] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.955243 24819 raft_consensus.cc:738] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.955914 24819 consensus_queue.cc:260] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.957791 24819 ts_tablet_manager.cc:1428] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:58:42.957923 24822 ts_tablet_manager.cc:1428] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent starting tablet: real 0.016s	user 0.004s	sys 0.002s
I20250114 20:58:42.958593 24819 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: Bootstrap starting.
I20250114 20:58:42.956210 24821 tablet_bootstrap.cc:654] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.958846 24822 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Bootstrap starting.
I20250114 20:58:42.964190 24819 tablet_bootstrap.cc:654] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.967818 24822 tablet_bootstrap.cc:654] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.969200 24819 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: No bootstrap required, opened a new log
I20250114 20:58:42.969754 24819 ts_tablet_manager.cc:1397] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: Time spent bootstrapping tablet: real 0.011s	user 0.010s	sys 0.001s
I20250114 20:58:42.972456 24819 raft_consensus.cc:357] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.973316 24819 raft_consensus.cc:383] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.973724 24819 raft_consensus.cc:738] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.974610 24819 consensus_queue.cc:260] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.977073 24821 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: No bootstrap required, opened a new log
I20250114 20:58:42.977180 24819 ts_tablet_manager.cc:1428] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: Time spent starting tablet: real 0.007s	user 0.002s	sys 0.003s
I20250114 20:58:42.977356 24822 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: No bootstrap required, opened a new log
I20250114 20:58:42.977684 24821 ts_tablet_manager.cc:1397] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent bootstrapping tablet: real 0.027s	user 0.011s	sys 0.008s
I20250114 20:58:42.977823 24822 ts_tablet_manager.cc:1397] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent bootstrapping tablet: real 0.019s	user 0.010s	sys 0.005s
I20250114 20:58:42.978201 24819 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: Bootstrap starting.
I20250114 20:58:42.980531 24821 raft_consensus.cc:357] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.980800 24822 raft_consensus.cc:357] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.981556 24821 raft_consensus.cc:383] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.981711 24822 raft_consensus.cc:383] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:42.981949 24821 raft_consensus.cc:738] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.982131 24822 raft_consensus.cc:738] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 52fd7c3fdc4f44a78590c14cc0b18814, State: Initialized, Role: FOLLOWER
I20250114 20:58:42.982808 24821 consensus_queue.cc:260] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.983047 24822 consensus_queue.cc:260] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:42.984670 24819 tablet_bootstrap.cc:654] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.985369 24822 ts_tablet_manager.cc:1428] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20250114 20:58:42.986313 24822 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: Bootstrap starting.
I20250114 20:58:42.988830 24821 ts_tablet_manager.cc:1428] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent starting tablet: real 0.011s	user 0.005s	sys 0.000s
I20250114 20:58:42.989753 24821 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: Bootstrap starting.
I20250114 20:58:42.992259 24822 tablet_bootstrap.cc:654] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:42.995420 24821 tablet_bootstrap.cc:654] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:43.002472 24819 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: No bootstrap required, opened a new log
I20250114 20:58:43.002825 24822 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: No bootstrap required, opened a new log
I20250114 20:58:43.003026 24819 ts_tablet_manager.cc:1397] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: Time spent bootstrapping tablet: real 0.025s	user 0.014s	sys 0.009s
I20250114 20:58:43.003314 24822 ts_tablet_manager.cc:1397] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent bootstrapping tablet: real 0.017s	user 0.013s	sys 0.003s
I20250114 20:58:43.005908 24822 raft_consensus.cc:357] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.005864 24819 raft_consensus.cc:357] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.006620 24822 raft_consensus.cc:383] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:43.006760 24821 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: No bootstrap required, opened a new log
I20250114 20:58:43.006853 24819 raft_consensus.cc:383] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:43.006947 24822 raft_consensus.cc:738] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 52fd7c3fdc4f44a78590c14cc0b18814, State: Initialized, Role: FOLLOWER
I20250114 20:58:43.007385 24821 ts_tablet_manager.cc:1397] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent bootstrapping tablet: real 0.018s	user 0.009s	sys 0.004s
I20250114 20:58:43.007370 24819 raft_consensus.cc:738] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Initialized, Role: FOLLOWER
I20250114 20:58:43.007782 24822 consensus_queue.cc:260] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.008297 24819 consensus_queue.cc:260] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.009986 24822 ts_tablet_manager.cc:1428] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:58:43.010653 24819 ts_tablet_manager.cc:1428] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20250114 20:58:43.010921 24822 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: Bootstrap starting.
I20250114 20:58:43.013522 24821 raft_consensus.cc:357] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.014236 24821 raft_consensus.cc:383] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:43.014516 24821 raft_consensus.cc:738] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Initialized, Role: FOLLOWER
I20250114 20:58:43.015164 24821 consensus_queue.cc:260] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.016093 24822 tablet_bootstrap.cc:654] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:43.017268 24821 ts_tablet_manager.cc:1428] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: Time spent starting tablet: real 0.009s	user 0.003s	sys 0.002s
I20250114 20:58:43.017980 24821 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: Bootstrap starting.
I20250114 20:58:43.023456 24821 tablet_bootstrap.cc:654] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:43.030361 24822 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: No bootstrap required, opened a new log
I20250114 20:58:43.030725 24822 ts_tablet_manager.cc:1397] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent bootstrapping tablet: real 0.020s	user 0.007s	sys 0.011s
I20250114 20:58:43.032275 24821 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: No bootstrap required, opened a new log
I20250114 20:58:43.032732 24821 ts_tablet_manager.cc:1397] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: Time spent bootstrapping tablet: real 0.015s	user 0.008s	sys 0.004s
I20250114 20:58:43.033164 24822 raft_consensus.cc:357] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.033897 24822 raft_consensus.cc:383] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:43.034170 24822 raft_consensus.cc:738] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 52fd7c3fdc4f44a78590c14cc0b18814, State: Initialized, Role: FOLLOWER
I20250114 20:58:43.034818 24822 consensus_queue.cc:260] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.035281 24821 raft_consensus.cc:357] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.036011 24821 raft_consensus.cc:383] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:43.036290 24821 raft_consensus.cc:738] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Initialized, Role: FOLLOWER
I20250114 20:58:43.037103 24822 ts_tablet_manager.cc:1428] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: Time spent starting tablet: real 0.006s	user 0.005s	sys 0.000s
I20250114 20:58:43.036959 24821 consensus_queue.cc:260] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.038775 24821 ts_tablet_manager.cc:1428] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: Time spent starting tablet: real 0.006s	user 0.006s	sys 0.000s
I20250114 20:58:43.075196 24823 raft_consensus.cc:491] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:43.075666 24823 raft_consensus.cc:513] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.077625 24823 leader_election.cc:290] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.086949 24625 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "aadaffa66a754aeabc6f18152e145030" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" is_pre_election: true
I20250114 20:58:43.087795 24625 raft_consensus.cc:2463] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 0.
I20250114 20:58:43.088725 24774 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "aadaffa66a754aeabc6f18152e145030" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" is_pre_election: true
I20250114 20:58:43.088871 24664 leader_election.cc:304] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, 52fd7c3fdc4f44a78590c14cc0b18814; no voters: 
I20250114 20:58:43.089437 24774 raft_consensus.cc:2463] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 0.
I20250114 20:58:43.089648 24823 raft_consensus.cc:2798] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:43.090018 24823 raft_consensus.cc:491] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:43.090268 24823 raft_consensus.cc:3054] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.094714 24823 raft_consensus.cc:513] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.096089 24823 leader_election.cc:290] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 election: Requested vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.096729 24625 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "aadaffa66a754aeabc6f18152e145030" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814"
I20250114 20:58:43.096987 24774 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "aadaffa66a754aeabc6f18152e145030" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72"
I20250114 20:58:43.097182 24625 raft_consensus.cc:3054] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.097469 24774 raft_consensus.cc:3054] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.101588 24625 raft_consensus.cc:2463] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 1.
I20250114 20:58:43.101828 24774 raft_consensus.cc:2463] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 1.
I20250114 20:58:43.102469 24664 leader_election.cc:304] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, 52fd7c3fdc4f44a78590c14cc0b18814; no voters: 
I20250114 20:58:43.103014 24823 raft_consensus.cc:2798] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:43.103937 24823 raft_consensus.cc:695] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Becoming Leader. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Running, Role: LEADER
I20250114 20:58:43.104617 24823 consensus_queue.cc:237] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.110875 24520 catalog_manager.cc:5526] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: term changed from 0 to 1, leader changed from <none> to 2a4adc6752064f6780cf3140807a7720 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:43.162662 24828 raft_consensus.cc:491] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:43.163110 24828 raft_consensus.cc:513] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.164968 24828 leader_election.cc:290] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.176085 24774 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" is_pre_election: true
I20250114 20:58:43.176532 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720" is_pre_election: true
I20250114 20:58:43.176815 24774 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 52fd7c3fdc4f44a78590c14cc0b18814 in term 0.
I20250114 20:58:43.177206 24700 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 52fd7c3fdc4f44a78590c14cc0b18814 in term 0.
I20250114 20:58:43.177860 24589 leader_election.cc:304] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 52fd7c3fdc4f44a78590c14cc0b18814, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 
I20250114 20:58:43.178623 24828 raft_consensus.cc:2798] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:43.178946 24828 raft_consensus.cc:491] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:43.179157 24828 raft_consensus.cc:3054] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.183533 24828 raft_consensus.cc:513] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.184944 24828 leader_election.cc:290] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [CANDIDATE]: Term 1 election: Requested vote from peers 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.185592 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720"
I20250114 20:58:43.185863 24774 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72"
I20250114 20:58:43.186062 24700 raft_consensus.cc:3054] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.186342 24774 raft_consensus.cc:3054] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.188838 24823 raft_consensus.cc:491] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:43.189291 24823 raft_consensus.cc:513] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.190860 24700 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 52fd7c3fdc4f44a78590c14cc0b18814 in term 1.
I20250114 20:58:43.191128 24823 leader_election.cc:290] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.191458 24774 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 52fd7c3fdc4f44a78590c14cc0b18814 in term 1.
I20250114 20:58:43.192157 24588 leader_election.cc:304] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, 52fd7c3fdc4f44a78590c14cc0b18814; no voters: 
I20250114 20:58:43.192382 24773 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0c43bb0ab0b1413a826516f7f1980835" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" is_pre_election: true
I20250114 20:58:43.192860 24828 raft_consensus.cc:2798] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:43.192984 24773 raft_consensus.cc:2463] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 0.
I20250114 20:58:43.193650 24625 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0c43bb0ab0b1413a826516f7f1980835" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" is_pre_election: true
I20250114 20:58:43.193969 24664 leader_election.cc:304] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 
I20250114 20:58:43.194317 24828 raft_consensus.cc:695] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 LEADER]: Becoming Leader. State: Replica: 52fd7c3fdc4f44a78590c14cc0b18814, State: Running, Role: LEADER
I20250114 20:58:43.194314 24625 raft_consensus.cc:2463] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 0.
I20250114 20:58:43.194789 24823 raft_consensus.cc:2798] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:43.195155 24823 raft_consensus.cc:491] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:43.195442 24823 raft_consensus.cc:3054] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.195394 24828 consensus_queue.cc:237] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.199288 24826 raft_consensus.cc:491] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:43.199765 24826 raft_consensus.cc:513] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.201220 24823 raft_consensus.cc:513] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.202085 24826 leader_election.cc:290] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615), 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759)
I20250114 20:58:43.203572 24625 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0c43bb0ab0b1413a826516f7f1980835" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814"
I20250114 20:58:43.203267 24520 catalog_manager.cc:5526] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 reported cstate change: term changed from 0 to 1, leader changed from <none> to 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:43.204139 24625 raft_consensus.cc:3054] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.204716 24823 leader_election.cc:290] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 election: Requested vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:43.212322 24773 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "0c43bb0ab0b1413a826516f7f1980835" candidate_uuid: "2a4adc6752064f6780cf3140807a7720" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d8849e2fa8d14bb6852f6843d40d9c72"
I20250114 20:58:43.212999 24773 raft_consensus.cc:3054] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.217994 24624 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "10785666cd15467b839a3054a1f7d54f" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" is_pre_election: true
I20250114 20:58:43.218679 24624 raft_consensus.cc:2463] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 0.
I20250114 20:58:43.219678 24738 leader_election.cc:304] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 52fd7c3fdc4f44a78590c14cc0b18814, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 
I20250114 20:58:43.220081 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "10785666cd15467b839a3054a1f7d54f" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720" is_pre_election: true
I20250114 20:58:43.220535 24826 raft_consensus.cc:2798] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:43.220707 24700 raft_consensus.cc:2463] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 0.
I20250114 20:58:43.220918 24826 raft_consensus.cc:491] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:43.221313 24826 raft_consensus.cc:3054] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.223237 24625 raft_consensus.cc:2463] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 1.
I20250114 20:58:43.219571 24773 raft_consensus.cc:2463] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2a4adc6752064f6780cf3140807a7720 in term 1.
I20250114 20:58:43.224308 24664 leader_election.cc:304] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, 52fd7c3fdc4f44a78590c14cc0b18814; no voters: 
I20250114 20:58:43.224912 24823 raft_consensus.cc:2798] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:43.225346 24823 raft_consensus.cc:695] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Becoming Leader. State: Replica: 2a4adc6752064f6780cf3140807a7720, State: Running, Role: LEADER
I20250114 20:58:43.226029 24823 consensus_queue.cc:237] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.226508 24826 raft_consensus.cc:513] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.227917 24826 leader_election.cc:290] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 1 election: Requested vote from peers 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615), 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759)
I20250114 20:58:43.228683 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "10785666cd15467b839a3054a1f7d54f" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720"
I20250114 20:58:43.228838 24625 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "10785666cd15467b839a3054a1f7d54f" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "52fd7c3fdc4f44a78590c14cc0b18814"
I20250114 20:58:43.229282 24700 raft_consensus.cc:3054] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.229447 24625 raft_consensus.cc:3054] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:43.234465 24700 raft_consensus.cc:2463] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 1.
I20250114 20:58:43.233682 24520 catalog_manager.cc:5526] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: term changed from 0 to 1, leader changed from <none> to 2a4adc6752064f6780cf3140807a7720 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:43.235446 24737 leader_election.cc:304] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 
I20250114 20:58:43.235515 24625 raft_consensus.cc:2463] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 1.
I20250114 20:58:43.236186 24826 raft_consensus.cc:2798] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:43.237340 24826 raft_consensus.cc:695] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 LEADER]: Becoming Leader. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Running, Role: LEADER
I20250114 20:58:43.238189 24826 consensus_queue.cc:237] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:43.245307 24520 catalog_manager.cc:5526] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: term changed from 0 to 1, leader changed from <none> to d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } }
I20250114 20:58:43.308256 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:43.323930 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:43.324668 20370 tablet_replica.cc:331] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814: stopping tablet replica
I20250114 20:58:43.325173 20370 raft_consensus.cc:2238] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:43.325531 20370 raft_consensus.cc:2267] T aadaffa66a754aeabc6f18152e145030 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:43.327191 20370 tablet_replica.cc:331] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814: stopping tablet replica
I20250114 20:58:43.327693 20370 raft_consensus.cc:2238] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:43.327997 20370 raft_consensus.cc:2267] T 0c43bb0ab0b1413a826516f7f1980835 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:43.330101 20370 tablet_replica.cc:331] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: stopping tablet replica
I20250114 20:58:43.330796 20370 raft_consensus.cc:2238] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:43.331501 20370 pending_rounds.cc:62] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Trying to abort 1 pending ops.
I20250114 20:58:43.331825 20370 pending_rounds.cc:69] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294571815587840 op_type: NO_OP noop_request { }
I20250114 20:58:43.332255 20370 raft_consensus.cc:2883] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:58:43.332571 20370 raft_consensus.cc:2267] T 432606a5f90743ceb236987a6ad43e33 P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:43.335078 20370 tablet_replica.cc:331] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814: stopping tablet replica
I20250114 20:58:43.335668 20370 raft_consensus.cc:2238] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:43.336066 20370 raft_consensus.cc:2267] T 10785666cd15467b839a3054a1f7d54f P 52fd7c3fdc4f44a78590c14cc0b18814 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:43.356910 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:43.602942 24823 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250114 20:58:43.618218 24664 consensus_peers.cc:487] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:43.662374 24826 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250114 20:58:43.675037 24738 consensus_peers.cc:487] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:58:43.702456 24664 consensus_peers.cc:487] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:43.783480 24852 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:58:44.369728 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:44.375388 24860 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:44.376078 24861 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:44.376686 24863 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:44.377584 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:44.378418 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:44.378621 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:44.378778 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888324378757 us; error 0 us; skew 500 ppm
I20250114 20:58:44.379215 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:44.381628 20370 webserver.cc:458] Webserver started at http://127.19.228.132:34367/ using document root <none> and password file <none>
I20250114 20:58:44.382040 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:44.382205 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:44.382433 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:44.383409 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "01f373f273e44b5391aaef1efadd847d"
format_stamp: "Formatted at 2025-01-14 20:58:44 on dist-test-slave-kc3q"
I20250114 20:58:44.387428 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.000s
I20250114 20:58:44.390357 24868 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:44.390995 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:44.391242 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "01f373f273e44b5391aaef1efadd847d"
format_stamp: "Formatted at 2025-01-14 20:58:44 on dist-test-slave-kc3q"
I20250114 20:58:44.391489 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.NoRebalancingIfReplicasRecovering.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:44.409410 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:44.410230 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:44.411557 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:44.413574 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:44.413738 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:44.413973 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:44.414108 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
W20250114 20:58:44.416819 24571 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver 52fd7c3fdc4f44a78590c14cc0b18814 not available for placement
I20250114 20:58:44.448251 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:41533
I20250114 20:58:44.448334 24930 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:41533 every 8 connection(s)
I20250114 20:58:44.460772 24931 heartbeater.cc:346] Connected to a master server at 127.19.228.190:41913
I20250114 20:58:44.461071 24931 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:44.461661 24931 heartbeater.cc:510] Master 127.19.228.190:41913 requested a full tablet report, sending...
I20250114 20:58:44.463232 24521 ts_manager.cc:194] Registered new tserver with Master: 01f373f273e44b5391aaef1efadd847d (127.19.228.132:41533)
I20250114 20:58:44.464504 24521 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:35118
I20250114 20:58:44.598032 24848 consensus_queue.cc:579] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.358s)
I20250114 20:58:44.607304 24773 consensus_queue.cc:237] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:44.613292 24700 raft_consensus.cc:1270] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:58:44.614861 24936 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:58:44.615636 24738 consensus_peers.cc:487] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:44.620103 24935 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:44.621591 24700 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
W20250114 20:58:44.622519 24739 consensus_peers.cc:487] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 01f373f273e44b5391aaef1efadd847d (127.19.228.132:41533): Couldn't send request to peer 01f373f273e44b5391aaef1efadd847d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 10785666cd15467b839a3054a1f7d54f. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:44.626857 24508 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 10785666cd15467b839a3054a1f7d54f with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250114 20:58:44.631090 24521 catalog_manager.cc:5526] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:44.673807 24839 consensus_queue.cc:579] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.568s)
I20250114 20:58:44.682967 24700 consensus_queue.cc:237] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:44.689296 24773 raft_consensus.cc:1270] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:58:44.690639 24946 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:58:44.692693 24664 consensus_peers.cc:487] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:58:44.696651 24665 consensus_peers.cc:487] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 -> Peer 01f373f273e44b5391aaef1efadd847d (127.19.228.132:41533): Couldn't send request to peer 01f373f273e44b5391aaef1efadd847d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: aadaffa66a754aeabc6f18152e145030. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:44.697409 24839 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:44.698792 24773 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:44.705636 24507 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet aadaffa66a754aeabc6f18152e145030 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250114 20:58:44.709569 24520 catalog_manager.cc:5526] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:44.714594 24935 raft_consensus.cc:491] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:44.714936 24935 raft_consensus.cc:513] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:44.716657 24935 leader_election.cc:290] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615)
I20250114 20:58:44.718053 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720" is_pre_election: true
I20250114 20:58:44.718712 24700 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 1.
I20250114 20:58:44.719722 24737 leader_election.cc:304] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 
W20250114 20:58:44.720229 24738 leader_election.cc:336] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
I20250114 20:58:44.720427 24935 raft_consensus.cc:2798] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:58:44.720808 24935 raft_consensus.cc:491] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:44.721124 24935 raft_consensus.cc:3054] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:58:44.725936 24935 raft_consensus.cc:513] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:44.727192 24935 leader_election.cc:290] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 election: Requested vote from peers 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759), 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615)
I20250114 20:58:44.728096 24700 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "432606a5f90743ceb236987a6ad43e33" candidate_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2a4adc6752064f6780cf3140807a7720"
I20250114 20:58:44.728545 24700 raft_consensus.cc:3054] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Advancing to term 2
W20250114 20:58:44.729864 24738 leader_election.cc:336] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
I20250114 20:58:44.732507 24700 raft_consensus.cc:2463] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d8849e2fa8d14bb6852f6843d40d9c72 in term 2.
I20250114 20:58:44.733273 24737 leader_election.cc:304] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 2a4adc6752064f6780cf3140807a7720, d8849e2fa8d14bb6852f6843d40d9c72; no voters: 52fd7c3fdc4f44a78590c14cc0b18814
I20250114 20:58:44.733839 24935 raft_consensus.cc:2798] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:58:44.734203 24935 raft_consensus.cc:695] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 LEADER]: Becoming Leader. State: Replica: d8849e2fa8d14bb6852f6843d40d9c72, State: Running, Role: LEADER
I20250114 20:58:44.734786 24935 consensus_queue.cc:237] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } }
I20250114 20:58:44.741122 24521 catalog_manager.cc:5526] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: term changed from 1 to 2, leader changed from 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) to d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131). New cstate: current_term: 2 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } }
I20250114 20:58:44.747833 24946 consensus_queue.cc:579] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.521s)
I20250114 20:58:44.755450 24700 consensus_queue.cc:237] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:44.760155 24773 raft_consensus.cc:1270] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
W20250114 20:58:44.760294 24665 consensus_peers.cc:487] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 -> Peer 01f373f273e44b5391aaef1efadd847d (127.19.228.132:41533): Couldn't send request to peer 01f373f273e44b5391aaef1efadd847d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 0c43bb0ab0b1413a826516f7f1980835. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:44.761322 24945 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:58:44.762528 24664 consensus_peers.cc:487] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:44.766523 24839 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:44.767822 24773 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:44.772408 24507 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 0c43bb0ab0b1413a826516f7f1980835 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250114 20:58:44.776082 24521 catalog_manager.cc:5526] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:45.224591 24952 ts_tablet_manager.cc:927] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Initiating tablet copy from peer d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:45.226127 24952 tablet_copy_client.cc:323] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:42017
I20250114 20:58:45.236635 24784 tablet_copy_service.cc:140] P d8849e2fa8d14bb6852f6843d40d9c72: Received BeginTabletCopySession request for tablet 10785666cd15467b839a3054a1f7d54f from peer 01f373f273e44b5391aaef1efadd847d ({username='slave'} at 127.0.0.1:41466)
I20250114 20:58:45.237108 24784 tablet_copy_service.cc:161] P d8849e2fa8d14bb6852f6843d40d9c72: Beginning new tablet copy session on tablet 10785666cd15467b839a3054a1f7d54f from peer 01f373f273e44b5391aaef1efadd847d at {username='slave'} at 127.0.0.1:41466: session id = 01f373f273e44b5391aaef1efadd847d-10785666cd15467b839a3054a1f7d54f
I20250114 20:58:45.243724 24784 tablet_copy_source_session.cc:215] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:45.246268 24952 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 10785666cd15467b839a3054a1f7d54f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:45.257650 24952 tablet_copy_client.cc:806] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:45.258212 24952 tablet_copy_client.cc:670] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:45.261205 24952 tablet_copy_client.cc:538] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:45.266433 24952 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Bootstrap starting.
I20250114 20:58:45.270313 24700 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 1. (index mismatch)
I20250114 20:58:45.272028 24935 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:58:45.274541 24956 ts_tablet_manager.cc:927] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Initiating tablet copy from peer 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615)
I20250114 20:58:45.276715 24956 tablet_copy_client.cc:323] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:36615
I20250114 20:58:45.291225 24952 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:58:45.291255 24710 tablet_copy_service.cc:140] P 2a4adc6752064f6780cf3140807a7720: Received BeginTabletCopySession request for tablet aadaffa66a754aeabc6f18152e145030 from peer 01f373f273e44b5391aaef1efadd847d ({username='slave'} at 127.0.0.1:48426)
I20250114 20:58:45.292106 24710 tablet_copy_service.cc:161] P 2a4adc6752064f6780cf3140807a7720: Beginning new tablet copy session on tablet aadaffa66a754aeabc6f18152e145030 from peer 01f373f273e44b5391aaef1efadd847d at {username='slave'} at 127.0.0.1:48426: session id = 01f373f273e44b5391aaef1efadd847d-aadaffa66a754aeabc6f18152e145030
I20250114 20:58:45.292565 24952 tablet_bootstrap.cc:492] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Bootstrap complete.
I20250114 20:58:45.293440 24952 ts_tablet_manager.cc:1397] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Time spent bootstrapping tablet: real 0.027s	user 0.011s	sys 0.017s
I20250114 20:58:45.296509 24952 raft_consensus.cc:357] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.297590 24952 raft_consensus.cc:738] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 01f373f273e44b5391aaef1efadd847d, State: Initialized, Role: LEARNER
I20250114 20:58:45.298332 24952 consensus_queue.cc:260] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.304049 24710 tablet_copy_source_session.cc:215] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:45.307572 24956 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet aadaffa66a754aeabc6f18152e145030. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:45.307660 24952 ts_tablet_manager.cc:1428] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: Time spent starting tablet: real 0.014s	user 0.004s	sys 0.010s
I20250114 20:58:45.308692 24931 heartbeater.cc:502] Master 127.19.228.190:41913 was elected leader, sending a full tablet report...
I20250114 20:58:45.310106 24784 tablet_copy_service.cc:342] P d8849e2fa8d14bb6852f6843d40d9c72: Request end of tablet copy session 01f373f273e44b5391aaef1efadd847d-10785666cd15467b839a3054a1f7d54f received from {username='slave'} at 127.0.0.1:41466
I20250114 20:58:45.310575 24784 tablet_copy_service.cc:434] P d8849e2fa8d14bb6852f6843d40d9c72: ending tablet copy session 01f373f273e44b5391aaef1efadd847d-10785666cd15467b839a3054a1f7d54f on tablet 10785666cd15467b839a3054a1f7d54f with peer 01f373f273e44b5391aaef1efadd847d
W20250114 20:58:45.314615 24738 consensus_peers.cc:487] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:45.321514 24956 tablet_copy_client.cc:806] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:45.321986 24956 tablet_copy_client.cc:670] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:45.324743 24956 tablet_copy_client.cc:538] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:45.331260 24956 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Bootstrap starting.
I20250114 20:58:45.345934 24956 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:58:45.346534 24956 tablet_bootstrap.cc:492] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Bootstrap complete.
I20250114 20:58:45.346974 24956 ts_tablet_manager.cc:1397] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Time spent bootstrapping tablet: real 0.016s	user 0.016s	sys 0.000s
I20250114 20:58:45.348817 24956 raft_consensus.cc:357] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.349447 24956 raft_consensus.cc:738] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 01f373f273e44b5391aaef1efadd847d, State: Initialized, Role: LEARNER
I20250114 20:58:45.349916 24956 consensus_queue.cc:260] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.350859 24952 ts_tablet_manager.cc:927] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Initiating tablet copy from peer 2a4adc6752064f6780cf3140807a7720 (127.19.228.130:36615)
I20250114 20:58:45.351819 24956 ts_tablet_manager.cc:1428] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: Time spent starting tablet: real 0.005s	user 0.008s	sys 0.000s
I20250114 20:58:45.352697 24952 tablet_copy_client.cc:323] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:36615
I20250114 20:58:45.353660 24710 tablet_copy_service.cc:342] P 2a4adc6752064f6780cf3140807a7720: Request end of tablet copy session 01f373f273e44b5391aaef1efadd847d-aadaffa66a754aeabc6f18152e145030 received from {username='slave'} at 127.0.0.1:48426
I20250114 20:58:45.354080 24710 tablet_copy_service.cc:434] P 2a4adc6752064f6780cf3140807a7720: ending tablet copy session 01f373f273e44b5391aaef1efadd847d-aadaffa66a754aeabc6f18152e145030 on tablet aadaffa66a754aeabc6f18152e145030 with peer 01f373f273e44b5391aaef1efadd847d
I20250114 20:58:45.354429 24709 tablet_copy_service.cc:140] P 2a4adc6752064f6780cf3140807a7720: Received BeginTabletCopySession request for tablet 0c43bb0ab0b1413a826516f7f1980835 from peer 01f373f273e44b5391aaef1efadd847d ({username='slave'} at 127.0.0.1:48426)
I20250114 20:58:45.354955 24709 tablet_copy_service.cc:161] P 2a4adc6752064f6780cf3140807a7720: Beginning new tablet copy session on tablet 0c43bb0ab0b1413a826516f7f1980835 from peer 01f373f273e44b5391aaef1efadd847d at {username='slave'} at 127.0.0.1:48426: session id = 01f373f273e44b5391aaef1efadd847d-0c43bb0ab0b1413a826516f7f1980835
I20250114 20:58:45.361933 24709 tablet_copy_source_session.cc:215] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:45.364048 24952 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0c43bb0ab0b1413a826516f7f1980835. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:45.373658 24952 tablet_copy_client.cc:806] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:45.374118 24952 tablet_copy_client.cc:670] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:45.376871 24952 tablet_copy_client.cc:538] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:45.382155 24952 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Bootstrap starting.
I20250114 20:58:45.395217 24952 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:58:45.395834 24952 tablet_bootstrap.cc:492] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Bootstrap complete.
I20250114 20:58:45.396292 24952 ts_tablet_manager.cc:1397] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Time spent bootstrapping tablet: real 0.014s	user 0.012s	sys 0.004s
I20250114 20:58:45.398067 24952 raft_consensus.cc:357] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.398612 24952 raft_consensus.cc:738] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 01f373f273e44b5391aaef1efadd847d, State: Initialized, Role: LEARNER
I20250114 20:58:45.399041 24952 consensus_queue.cc:260] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.400936 24952 ts_tablet_manager.cc:1428] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:58:45.402410 24709 tablet_copy_service.cc:342] P 2a4adc6752064f6780cf3140807a7720: Request end of tablet copy session 01f373f273e44b5391aaef1efadd847d-0c43bb0ab0b1413a826516f7f1980835 received from {username='slave'} at 127.0.0.1:48426
I20250114 20:58:45.402695 24709 tablet_copy_service.cc:434] P 2a4adc6752064f6780cf3140807a7720: ending tablet copy session 01f373f273e44b5391aaef1efadd847d-0c43bb0ab0b1413a826516f7f1980835 on tablet 0c43bb0ab0b1413a826516f7f1980835 with peer 01f373f273e44b5391aaef1efadd847d
W20250114 20:58:45.418709 24571 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver 52fd7c3fdc4f44a78590c14cc0b18814 not available for placement
I20250114 20:58:45.651523 24906 raft_consensus.cc:1212] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:58:45.664192 24906 raft_consensus.cc:1212] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:58:45.729844 24906 raft_consensus.cc:1212] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:58:45.750080 24961 consensus_queue.cc:579] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.141s)
I20250114 20:58:45.772114 24970 consensus_queue.cc:579] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.016s)
I20250114 20:58:45.847307 24961 consensus_queue.cc:579] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.111s)
I20250114 20:58:45.856119 24773 consensus_queue.cc:237] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 2.1, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:45.861153 24700 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 2 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
W20250114 20:58:45.861729 24739 consensus_peers.cc:487] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 01f373f273e44b5391aaef1efadd847d (127.19.228.132:41533): Couldn't send request to peer 01f373f273e44b5391aaef1efadd847d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 432606a5f90743ceb236987a6ad43e33. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:45.862332 24968 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:58:45.863123 24738 consensus_peers.cc:487] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:45.866961 24935 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 LEADER]: Committing config change with OpId 2.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:45.868330 24700 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Committing config change with OpId 2.2: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } } }
I20250114 20:58:45.874244 24508 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 432606a5f90743ceb236987a6ad43e33 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250114 20:58:45.877916 24520 catalog_manager.cc:5526] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index -1 to 2, NON_VOTER 01f373f273e44b5391aaef1efadd847d (127.19.228.132) added. New cstate: current_term: 2 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:46.135042 24970 raft_consensus.cc:1059] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: attempting to promote NON_VOTER 01f373f273e44b5391aaef1efadd847d to VOTER
I20250114 20:58:46.136796 24970 consensus_queue.cc:237] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.142325 24906 raft_consensus.cc:1270] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
W20250114 20:58:46.143105 24664 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111) [suppressed 24 similar messages]
I20250114 20:58:46.143764 24773 raft_consensus.cc:1270] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:46.143507 24979 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:46.145466 24979 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:58:46.146023 24664 consensus_peers.cc:487] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:46.151947 24970 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.153225 24773 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.159945 24906 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.161701 24521 catalog_manager.cc:5526] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250114 20:58:46.174306 24700 consensus_queue.cc:237] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.178622 24906 raft_consensus.cc:1270] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.179421 24773 raft_consensus.cc:1270] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.180163 24839 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.001s
I20250114 20:58:46.181519 24979 consensus_queue.cc:1035] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:46.186877 24839 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.189661 24773 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.190048 24905 raft_consensus.cc:2949] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.199901 24507 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet aadaffa66a754aeabc6f18152e145030 with cas_config_opid_index 3: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:46.199324 24523 catalog_manager.cc:5526] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
W20250114 20:58:46.215345 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet aadaffa66a754aeabc6f18152e145030: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
I20250114 20:58:46.254603 24970 raft_consensus.cc:1059] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: attempting to promote NON_VOTER 01f373f273e44b5391aaef1efadd847d to VOTER
I20250114 20:58:46.256812 24970 consensus_queue.cc:237] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.262295 24773 raft_consensus.cc:1270] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:46.262499 24905 raft_consensus.cc:1270] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:46.263644 24979 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:46.264498 24969 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.001s
W20250114 20:58:46.265269 24664 consensus_peers.cc:487] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:46.267016 24973 raft_consensus.cc:1059] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: attempting to promote NON_VOTER 01f373f273e44b5391aaef1efadd847d to VOTER
I20250114 20:58:46.269620 24973 consensus_queue.cc:237] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.276458 24905 raft_consensus.cc:1270] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 LEARNER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:46.278251 24968 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:58:46.281424 24738 consensus_peers.cc:487] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:46.281769 24979 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.283491 24700 raft_consensus.cc:1270] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:58:46.283465 24773 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.285097 24973 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:46.294162 24906 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.298228 24520 catalog_manager.cc:5526] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.301406 24968 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.306708 24905 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.311149 24700 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.312976 24699 consensus_queue.cc:237] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.315503 24523 catalog_manager.cc:5526] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250114 20:58:46.319420 24773 raft_consensus.cc:1270] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.320676 24906 raft_consensus.cc:1270] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Refusing update from remote peer 2a4adc6752064f6780cf3140807a7720: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.320991 24979 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:46.322059 24969 consensus_queue.cc:1035] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:46.326825 24979 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.331807 24906 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.332182 24774 consensus_queue.cc:237] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:46.329205 24773 raft_consensus.cc:2949] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.339200 24699 raft_consensus.cc:1270] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.342471 24935 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:46.345026 24507 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 0c43bb0ab0b1413a826516f7f1980835 with cas_config_opid_index 3: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:46.348920 24906 raft_consensus.cc:1270] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:58:46.348984 24935 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.350576 24973 consensus_queue.cc:1035] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:46.350286 24521 catalog_manager.cc:5526] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d reported cstate change: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New cstate: current_term: 1 leader_uuid: "2a4adc6752064f6780cf3140807a7720" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.354779 24699 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.357159 24508 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 10785666cd15467b839a3054a1f7d54f with cas_config_opid_index 3: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:46.356626 24906 raft_consensus.cc:2949] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:46.370553 24520 catalog_manager.cc:5526] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 reported cstate change: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New cstate: current_term: 1 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:46.373220 24990 ts_tablet_manager.cc:927] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Initiating tablet copy from peer d8849e2fa8d14bb6852f6843d40d9c72 (127.19.228.131:42017)
I20250114 20:58:46.378302 24990 tablet_copy_client.cc:323] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:42017
I20250114 20:58:46.379945 24784 tablet_copy_service.cc:140] P d8849e2fa8d14bb6852f6843d40d9c72: Received BeginTabletCopySession request for tablet 432606a5f90743ceb236987a6ad43e33 from peer 01f373f273e44b5391aaef1efadd847d ({username='slave'} at 127.0.0.1:41466)
I20250114 20:58:46.380467 24784 tablet_copy_service.cc:161] P d8849e2fa8d14bb6852f6843d40d9c72: Beginning new tablet copy session on tablet 432606a5f90743ceb236987a6ad43e33 from peer 01f373f273e44b5391aaef1efadd847d at {username='slave'} at 127.0.0.1:41466: session id = 01f373f273e44b5391aaef1efadd847d-432606a5f90743ceb236987a6ad43e33
I20250114 20:58:46.386178 24784 tablet_copy_source_session.cc:215] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:58:46.388372 24990 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 432606a5f90743ceb236987a6ad43e33. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:46.397500 24990 tablet_copy_client.cc:806] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 0 data blocks...
I20250114 20:58:46.397883 24990 tablet_copy_client.cc:670] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:58:46.400580 24990 tablet_copy_client.cc:538] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:58:46.405858 24990 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Bootstrap starting.
I20250114 20:58:46.418679 24990 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:58:46.419384 24990 tablet_bootstrap.cc:492] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Bootstrap complete.
I20250114 20:58:46.420042 24990 ts_tablet_manager.cc:1397] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Time spent bootstrapping tablet: real 0.014s	user 0.012s	sys 0.004s
W20250114 20:58:46.422775 24571 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver 52fd7c3fdc4f44a78590c14cc0b18814 not available for placement
I20250114 20:58:46.422526 24990 raft_consensus.cc:357] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:46.423290 24990 raft_consensus.cc:738] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 01f373f273e44b5391aaef1efadd847d, State: Initialized, Role: LEARNER
I20250114 20:58:46.423794 24990 consensus_queue.cc:260] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: true } }
I20250114 20:58:46.425727 24990 ts_tablet_manager.cc:1428] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:58:46.427103 24784 tablet_copy_service.cc:342] P d8849e2fa8d14bb6852f6843d40d9c72: Request end of tablet copy session 01f373f273e44b5391aaef1efadd847d-432606a5f90743ceb236987a6ad43e33 received from {username='slave'} at 127.0.0.1:41466
I20250114 20:58:46.427389 24784 tablet_copy_service.cc:434] P d8849e2fa8d14bb6852f6843d40d9c72: ending tablet copy session 01f373f273e44b5391aaef1efadd847d-432606a5f90743ceb236987a6ad43e33 on tablet 432606a5f90743ceb236987a6ad43e33 with peer 01f373f273e44b5391aaef1efadd847d
I20250114 20:58:46.823310 24906 raft_consensus.cc:1212] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.1->[2.2-2.2]   Dedup: 2.2->[]
I20250114 20:58:47.001447 24961 consensus_queue.cc:579] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Leader has been unable to successfully communicate with peer 52fd7c3fdc4f44a78590c14cc0b18814 for more than 1 seconds (1.144s)
I20250114 20:58:47.375093 24973 raft_consensus.cc:1059] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: attempting to promote NON_VOTER 01f373f273e44b5391aaef1efadd847d to VOTER
I20250114 20:58:47.376804 24973 consensus_queue.cc:237] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 0, Current term: 2, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:47.382325 24699 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:58:47.382594 24906 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 LEARNER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:58:47.383960 24973 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:58:47.384541 24996 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:58:47.386067 24738 consensus_peers.cc:487] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 -> Peer 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): Couldn't send request to peer 52fd7c3fdc4f44a78590c14cc0b18814. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:58:47.394938 24996 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 LEADER]: Committing config change with OpId 2.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.396322 24699 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.402915 24906 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.405738 24521 catalog_manager.cc:5526] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 reported cstate change: config changed from index 2 to 3, 01f373f273e44b5391aaef1efadd847d (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "52fd7c3fdc4f44a78590c14cc0b18814" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 40759 } } peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.421051 24774 consensus_queue.cc:237] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 2.3, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } }
I20250114 20:58:47.425635 24699 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
I20250114 20:58:47.425673 24906 raft_consensus.cc:1270] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 FOLLOWER]: Refusing update from remote peer d8849e2fa8d14bb6852f6843d40d9c72: Log matching property violated. Preceding OpId in replica: term: 2 index: 3. Preceding OpId from leader: term: 2 index: 4. (index mismatch)
W20250114 20:58:47.426813 24571 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver 52fd7c3fdc4f44a78590c14cc0b18814 not available for placement
I20250114 20:58:47.427003 24973 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:47.427711 24968 consensus_queue.cc:1035] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:58:47.433063 24961 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 LEADER]: Committing config change with OpId 2.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.434409 24906 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.434587 24699 raft_consensus.cc:2949] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Committing config change with OpId 2.4: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
I20250114 20:58:47.441465 24508 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 432606a5f90743ceb236987a6ad43e33 with cas_config_opid_index 3: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:58:47.443965 24523 catalog_manager.cc:5526] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d reported cstate change: config changed from index 3 to 4, VOTER 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129) evicted. New cstate: current_term: 2 leader_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "2a4adc6752064f6780cf3140807a7720" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 36615 } } peers { permanent_uuid: "d8849e2fa8d14bb6852f6843d40d9c72" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 42017 } } peers { permanent_uuid: "01f373f273e44b5391aaef1efadd847d" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 41533 } attrs { promote: false } } }
W20250114 20:58:47.455332 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet 432606a5f90743ceb236987a6ad43e33: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
W20250114 20:58:48.497246 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet aadaffa66a754aeabc6f18152e145030: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
W20250114 20:58:49.691979 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet 432606a5f90743ceb236987a6ad43e33: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
I20250114 20:58:51.481746 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:58:51.501274 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:51.501956 20370 tablet_replica.cc:331] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720: stopping tablet replica
I20250114 20:58:51.502516 20370 raft_consensus.cc:2238] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.503034 20370 raft_consensus.cc:2267] T 432606a5f90743ceb236987a6ad43e33 P 2a4adc6752064f6780cf3140807a7720 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.505947 20370 tablet_replica.cc:331] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720: stopping tablet replica
I20250114 20:58:51.506410 20370 raft_consensus.cc:2238] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:51.507072 20370 raft_consensus.cc:2267] T 0c43bb0ab0b1413a826516f7f1980835 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.508759 20370 tablet_replica.cc:331] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720: stopping tablet replica
I20250114 20:58:51.509229 20370 raft_consensus.cc:2238] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:51.509889 20370 raft_consensus.cc:2267] T aadaffa66a754aeabc6f18152e145030 P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.511876 20370 tablet_replica.cc:331] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720: stopping tablet replica
I20250114 20:58:51.512321 20370 raft_consensus.cc:2238] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.512807 20370 raft_consensus.cc:2267] T 10785666cd15467b839a3054a1f7d54f P 2a4adc6752064f6780cf3140807a7720 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.535039 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:51.547942 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:51.567410 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:51.568174 20370 tablet_replica.cc:331] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72: stopping tablet replica
I20250114 20:58:51.568742 20370 raft_consensus.cc:2238] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:51.569465 20370 raft_consensus.cc:2267] T 10785666cd15467b839a3054a1f7d54f P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.571336 20370 tablet_replica.cc:331] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72: stopping tablet replica
I20250114 20:58:51.571822 20370 raft_consensus.cc:2238] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:58:51.572500 20370 raft_consensus.cc:2267] T 432606a5f90743ceb236987a6ad43e33 P d8849e2fa8d14bb6852f6843d40d9c72 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.574219 20370 tablet_replica.cc:331] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72: stopping tablet replica
I20250114 20:58:51.574666 20370 raft_consensus.cc:2238] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.575160 20370 raft_consensus.cc:2267] T 0c43bb0ab0b1413a826516f7f1980835 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.576817 20370 tablet_replica.cc:331] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72: stopping tablet replica
I20250114 20:58:51.577243 20370 raft_consensus.cc:2238] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.577719 20370 raft_consensus.cc:2267] T aadaffa66a754aeabc6f18152e145030 P d8849e2fa8d14bb6852f6843d40d9c72 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.599458 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:51.612726 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:58:51.630648 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:51.631260 20370 tablet_replica.cc:331] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d: stopping tablet replica
I20250114 20:58:51.631809 20370 raft_consensus.cc:2238] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.632265 20370 raft_consensus.cc:2267] T 432606a5f90743ceb236987a6ad43e33 P 01f373f273e44b5391aaef1efadd847d [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.633832 20370 tablet_replica.cc:331] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d: stopping tablet replica
I20250114 20:58:51.634260 20370 raft_consensus.cc:2238] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.634670 20370 raft_consensus.cc:2267] T 0c43bb0ab0b1413a826516f7f1980835 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.636215 20370 tablet_replica.cc:331] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d: stopping tablet replica
I20250114 20:58:51.636613 20370 raft_consensus.cc:2238] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.637001 20370 raft_consensus.cc:2267] T aadaffa66a754aeabc6f18152e145030 P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.638494 20370 tablet_replica.cc:331] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d: stopping tablet replica
I20250114 20:58:51.638882 20370 raft_consensus.cc:2238] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:51.639297 20370 raft_consensus.cc:2267] T 10785666cd15467b839a3054a1f7d54f P 01f373f273e44b5391aaef1efadd847d [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:51.659879 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:58:51.670982 20370 master.cc:537] Master@127.19.228.190:41913 shutting down...
W20250114 20:58:51.773867 24508 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111) [suppressed 42 similar messages]
W20250114 20:58:51.776571 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet 432606a5f90743ceb236987a6ad43e33: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
W20250114 20:58:54.669947 24508 catalog_manager.cc:4615] TS 52fd7c3fdc4f44a78590c14cc0b18814 (127.19.228.129:40759): DeleteTablet:TABLET_DATA_TOMBSTONED RPC failed for tablet aadaffa66a754aeabc6f18152e145030: Network error: Client connection negotiation failed: client connection to 127.19.228.129:40759: connect: Connection refused (error 111)
I20250114 20:58:55.277977 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:55.279521 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:55.279923 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 24eebd20cc4a4fa08e4acc9da426bf70: stopping tablet replica
I20250114 20:58:55.309418 20370 master.cc:559] Master@127.19.228.190:41913 shutdown complete.
[       OK ] AutoRebalancerTest.NoRebalancingIfReplicasRecovering (14070 ms)
[ RUN      ] AutoRebalancerTest.TestHandlingFailedTservers
I20250114 20:58:55.352717 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:33749
I20250114 20:58:55.353811 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:55.359828 25015 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.360456 25016 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.360879 25018 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:55.361461 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:55.362119 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:55.362308 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:55.362450 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888335362433 us; error 0 us; skew 500 ppm
I20250114 20:58:55.362915 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:55.369285 20370 webserver.cc:458] Webserver started at http://127.19.228.190:32775/ using document root <none> and password file <none>
I20250114 20:58:55.369741 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:55.369903 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:55.370143 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:55.371376 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "81f6f1a81b0441b69e101c4242e5f147"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.375958 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.002s
I20250114 20:58:55.378954 25023 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.379711 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:58:55.379928 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "81f6f1a81b0441b69e101c4242e5f147"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.380159 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:55.397797 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:55.398914 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:55.434201 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:33749
I20250114 20:58:55.434283 25074 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:33749 every 8 connection(s)
I20250114 20:58:55.437992 25075 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:55.449074 25075 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147: Bootstrap starting.
I20250114 20:58:55.453399 25075 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:55.457173 25075 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147: No bootstrap required, opened a new log
I20250114 20:58:55.459115 25075 raft_consensus.cc:357] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER }
I20250114 20:58:55.459604 25075 raft_consensus.cc:383] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:55.459815 25075 raft_consensus.cc:738] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 81f6f1a81b0441b69e101c4242e5f147, State: Initialized, Role: FOLLOWER
I20250114 20:58:55.460330 25075 consensus_queue.cc:260] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER }
I20250114 20:58:55.460760 25075 raft_consensus.cc:397] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:58:55.460963 25075 raft_consensus.cc:491] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:58:55.461205 25075 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:55.465500 25075 raft_consensus.cc:513] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER }
I20250114 20:58:55.465960 25075 leader_election.cc:304] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 81f6f1a81b0441b69e101c4242e5f147; no voters: 
I20250114 20:58:55.466951 25075 leader_election.cc:290] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:58:55.467306 25078 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:55.468495 25078 raft_consensus.cc:695] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 1 LEADER]: Becoming Leader. State: Replica: 81f6f1a81b0441b69e101c4242e5f147, State: Running, Role: LEADER
I20250114 20:58:55.469127 25078 consensus_queue.cc:237] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER }
I20250114 20:58:55.469664 25075 sys_catalog.cc:564] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:58:55.472064 25079 sys_catalog.cc:455] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "81f6f1a81b0441b69e101c4242e5f147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER } }
I20250114 20:58:55.472764 25079 sys_catalog.cc:458] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:55.472087 25080 sys_catalog.cc:455] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 81f6f1a81b0441b69e101c4242e5f147. Latest consensus state: current_term: 1 leader_uuid: "81f6f1a81b0441b69e101c4242e5f147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "81f6f1a81b0441b69e101c4242e5f147" member_type: VOTER } }
I20250114 20:58:55.473739 25080 sys_catalog.cc:458] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [sys.catalog]: This master's current role is: LEADER
I20250114 20:58:55.475765 25083 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:58:55.480314 25083 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:58:55.484932 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:58:55.488843 25083 catalog_manager.cc:1348] Generated new cluster ID: 5946d47d6f624c09b5f0feb78c98d4c9
I20250114 20:58:55.489085 25083 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:58:55.504192 25083 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:58:55.505359 25083 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:58:55.523761 25083 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147: Generated new TSK 0
I20250114 20:58:55.524305 25083 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:58:55.551963 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:55.558161 25096 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.558885 25097 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.559783 25099 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:55.560385 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:55.561108 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:55.561290 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:55.561431 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888335561416 us; error 0 us; skew 500 ppm
I20250114 20:58:55.561889 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:55.567962 20370 webserver.cc:458] Webserver started at http://127.19.228.129:34919/ using document root <none> and password file <none>
I20250114 20:58:55.568388 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:55.568548 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:55.568786 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:55.569903 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.574270 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.003s	sys 0.003s
I20250114 20:58:55.577296 25104 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.577997 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:58:55.578292 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.578642 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:55.601764 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:55.602876 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:55.604550 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:55.606762 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:55.606928 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.607148 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:55.607297 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.644191 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:36371
I20250114 20:58:55.644277 25166 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:36371 every 8 connection(s)
I20250114 20:58:55.648557 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:55.656102 25171 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.657104 25172 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.659873 25174 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:55.660769 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:55.661226 25167 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:58:55.661605 25167 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:55.661697 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:55.661971 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:55.662158 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888335662138 us; error 0 us; skew 500 ppm
I20250114 20:58:55.662423 25167 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:58:55.662833 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:55.664714 25040 ts_manager.cc:194] Registered new tserver with Master: fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371)
I20250114 20:58:55.665544 20370 webserver.cc:458] Webserver started at http://127.19.228.130:33447/ using document root <none> and password file <none>
I20250114 20:58:55.666169 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:55.666393 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:55.666683 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:55.667189 25040 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:41670
I20250114 20:58:55.668243 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "a8c725dc9d934a6588b141dced4e3582"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.672724 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:58:55.675752 25179 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.676421 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:58:55.676661 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "a8c725dc9d934a6588b141dced4e3582"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.676920 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:55.691334 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:55.692672 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:55.694465 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:55.697154 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:55.697399 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.697682 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:55.697876 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.749749 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:39243
I20250114 20:58:55.749833 25241 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:39243 every 8 connection(s)
I20250114 20:58:55.754052 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:55.760439 25245 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.761327 25246 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:55.763597 25248 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:55.763996 25242 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:58:55.764321 25242 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:55.764974 25242 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:58:55.766633 25040 ts_manager.cc:194] Registered new tserver with Master: a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:58:55.767971 25040 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:41686
I20250114 20:58:55.768374 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:55.769198 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:55.769389 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:55.769548 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888335769532 us; error 0 us; skew 500 ppm
I20250114 20:58:55.770033 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:55.772154 20370 webserver.cc:458] Webserver started at http://127.19.228.131:38729/ using document root <none> and password file <none>
I20250114 20:58:55.772563 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:55.772728 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:55.772958 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:55.774013 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "671e2dda2f484138aa8edc159d1ae0ca"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.778020 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:58:55.781051 25253 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.781705 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.001s	sys 0.000s
I20250114 20:58:55.781945 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "671e2dda2f484138aa8edc159d1ae0ca"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:58:55.782193 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:55.806206 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:55.807324 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:55.808670 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:55.810722 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:55.810905 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.811105 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:55.811247 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:55.848209 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:37755
I20250114 20:58:55.848315 25315 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:37755 every 8 connection(s)
I20250114 20:58:55.860203 25316 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:58:55.860546 25316 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:55.861222 25316 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:58:55.862942 25040 ts_manager.cc:194] Registered new tserver with Master: 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:55.863152 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012025694s
I20250114 20:58:55.864266 25040 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:41692
I20250114 20:58:56.669981 25167 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
I20250114 20:58:56.770201 25242 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
I20250114 20:58:56.866878 25316 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
I20250114 20:58:56.894901 20370 test_util.cc:274] Using random seed: -732574816
I20250114 20:58:56.915163 25040 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:37268:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:58:56.917441 25040 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:58:56.974589 25132 tablet_service.cc:1467] Processing CreateTablet for tablet 7a28538d10344370994db07360ffc8bd (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:56.975699 25207 tablet_service.cc:1467] Processing CreateTablet for tablet 7a28538d10344370994db07360ffc8bd (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:56.975917 25131 tablet_service.cc:1467] Processing CreateTablet for tablet 81b27eb642254a36a085d92231e1620f (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:56.976953 25207 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7a28538d10344370994db07360ffc8bd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.976009 25132 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7a28538d10344370994db07360ffc8bd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.978034 25130 tablet_service.cc:1467] Processing CreateTablet for tablet 3ceb4d4fab52486690cf03f728674a66 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:56.978794 25280 tablet_service.cc:1467] Processing CreateTablet for tablet 81b27eb642254a36a085d92231e1620f (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:56.980067 25280 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 81b27eb642254a36a085d92231e1620f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.980762 25129 tablet_service.cc:1467] Processing CreateTablet for tablet 45c927b20e4641ea9e74d27149487061 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:56.979342 25130 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3ceb4d4fab52486690cf03f728674a66. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.982110 25204 tablet_service.cc:1467] Processing CreateTablet for tablet 45c927b20e4641ea9e74d27149487061 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:56.979311 25278 tablet_service.cc:1467] Processing CreateTablet for tablet 45c927b20e4641ea9e74d27149487061 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:58:56.983354 25204 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 45c927b20e4641ea9e74d27149487061. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.988116 25279 tablet_service.cc:1467] Processing CreateTablet for tablet 3ceb4d4fab52486690cf03f728674a66 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:56.984009 25278 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 45c927b20e4641ea9e74d27149487061. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.984503 25131 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 81b27eb642254a36a085d92231e1620f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.984673 25205 tablet_service.cc:1467] Processing CreateTablet for tablet 3ceb4d4fab52486690cf03f728674a66 (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:58:56.991786 25205 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3ceb4d4fab52486690cf03f728674a66. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.992277 25129 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 45c927b20e4641ea9e74d27149487061. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.995057 25279 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3ceb4d4fab52486690cf03f728674a66. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.986287 25206 tablet_service.cc:1467] Processing CreateTablet for tablet 81b27eb642254a36a085d92231e1620f (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:58:56.997551 25206 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 81b27eb642254a36a085d92231e1620f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:56.988021 25281 tablet_service.cc:1467] Processing CreateTablet for tablet 7a28538d10344370994db07360ffc8bd (DEFAULT_TABLE table=test-workload [id=bc696e3b35a24f1c9796f8054abefcd9]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:58:57.002912 25281 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 7a28538d10344370994db07360ffc8bd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:58:57.050459 25337 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:58:57.054592 25336 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:58:57.056385 25338 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:58:57.057410 25337 tablet_bootstrap.cc:654] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.062315 25338 tablet_bootstrap.cc:654] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.065066 25336 tablet_bootstrap.cc:654] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.069758 25338 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: No bootstrap required, opened a new log
I20250114 20:58:57.070254 25338 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.014s	user 0.009s	sys 0.003s
I20250114 20:58:57.071943 25336 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: No bootstrap required, opened a new log
I20250114 20:58:57.072132 25337 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: No bootstrap required, opened a new log
I20250114 20:58:57.072309 25336 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.018s	user 0.012s	sys 0.001s
I20250114 20:58:57.072610 25337 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.023s	user 0.014s	sys 0.005s
I20250114 20:58:57.072758 25338 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.073522 25338 raft_consensus.cc:383] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.073802 25338 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.074476 25336 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.074479 25338 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.075229 25336 raft_consensus.cc:383] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.075520 25336 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.075368 25337 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.076153 25337 raft_consensus.cc:383] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.076510 25337 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.076316 25336 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.077268 25337 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.079140 25336 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.007s	user 0.001s	sys 0.006s
I20250114 20:58:57.080088 25336 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:58:57.085215 25337 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.012s	user 0.000s	sys 0.009s
I20250114 20:58:57.086216 25336 tablet_bootstrap.cc:654] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.087296 25338 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.017s	user 0.011s	sys 0.003s
I20250114 20:58:57.088303 25337 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:58:57.092105 25338 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:58:57.095288 25336 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: No bootstrap required, opened a new log
I20250114 20:58:57.095817 25336 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.016s	user 0.010s	sys 0.003s
I20250114 20:58:57.097870 25338 tablet_bootstrap.cc:654] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.098408 25336 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.099150 25336 raft_consensus.cc:383] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.099458 25336 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.100275 25336 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.102051 25337 tablet_bootstrap.cc:654] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.102784 25336 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.007s	user 0.005s	sys 0.000s
I20250114 20:58:57.105365 25336 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:58:57.103435 25338 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: No bootstrap required, opened a new log
I20250114 20:58:57.107842 25338 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.016s	user 0.010s	sys 0.002s
I20250114 20:58:57.109760 25337 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: No bootstrap required, opened a new log
I20250114 20:58:57.110283 25337 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.022s	user 0.008s	sys 0.003s
I20250114 20:58:57.110505 25336 tablet_bootstrap.cc:654] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.110462 25338 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.111440 25338 raft_consensus.cc:383] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.111793 25338 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.112485 25338 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.113001 25337 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.113750 25337 raft_consensus.cc:383] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.115233 25337 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.114486 25338 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:58:57.115906 25337 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.116922 25338 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:58:57.117166 25336 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: No bootstrap required, opened a new log
I20250114 20:58:57.117640 25336 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.012s	user 0.010s	sys 0.001s
I20250114 20:58:57.118315 25337 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.008s	user 0.005s	sys 0.000s
I20250114 20:58:57.119177 25337 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:58:57.120630 25336 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.121335 25336 raft_consensus.cc:383] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.121606 25336 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.122375 25336 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.123251 25338 tablet_bootstrap.cc:654] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.125299 25337 tablet_bootstrap.cc:654] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.130702 25336 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.013s	user 0.005s	sys 0.000s
I20250114 20:58:57.133144 25336 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:58:57.133690 25338 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: No bootstrap required, opened a new log
I20250114 20:58:57.134181 25338 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.017s	user 0.016s	sys 0.001s
I20250114 20:58:57.134537 25337 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: No bootstrap required, opened a new log
I20250114 20:58:57.135037 25337 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.016s	user 0.008s	sys 0.007s
I20250114 20:58:57.136947 25338 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.137501 25337 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.137763 25338 raft_consensus.cc:383] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.138087 25337 raft_consensus.cc:383] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.138156 25338 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.138511 25337 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.138981 25336 tablet_bootstrap.cc:654] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.138996 25338 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.139222 25337 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.141593 25338 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.007s	user 0.005s	sys 0.000s
I20250114 20:58:57.141609 25337 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.006s	user 0.002s	sys 0.003s
I20250114 20:58:57.142686 25337 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:58:57.142686 25338 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:58:57.148564 25337 tablet_bootstrap.cc:654] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.149585 25338 tablet_bootstrap.cc:654] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Neither blocks nor log segments found. Creating new log.
I20250114 20:58:57.162158 25337 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: No bootstrap required, opened a new log
I20250114 20:58:57.162700 25337 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.020s	user 0.016s	sys 0.003s
I20250114 20:58:57.163256 25338 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: No bootstrap required, opened a new log
I20250114 20:58:57.163779 25338 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.021s	user 0.019s	sys 0.000s
I20250114 20:58:57.165467 25337 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.166357 25336 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: No bootstrap required, opened a new log
I20250114 20:58:57.166218 25337 raft_consensus.cc:383] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.166246 25338 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.166889 25336 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.034s	user 0.015s	sys 0.012s
I20250114 20:58:57.166832 25337 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.167138 25338 raft_consensus.cc:383] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.167471 25338 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.167838 25337 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.168145 25338 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.169174 25336 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.169982 25336 raft_consensus.cc:383] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:58:57.170310 25336 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:58:57.170423 25338 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:58:57.171154 25336 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.172679 25337 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.010s	user 0.005s	sys 0.000s
I20250114 20:58:57.175031 25336 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.008s	user 0.002s	sys 0.003s
I20250114 20:58:57.198498 25344 raft_consensus.cc:491] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:57.198927 25344 raft_consensus.cc:513] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.200938 25344 leader_election.cc:290] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.210127 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:58:57.210413 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:58:57.210839 25217 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 0.
I20250114 20:58:57.211014 25291 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 0.
I20250114 20:58:57.211895 25107 leader_election.cc:304] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.212585 25344 raft_consensus.cc:2798] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:57.212831 25344 raft_consensus.cc:491] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:57.213069 25344 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.217499 25344 raft_consensus.cc:513] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.218851 25344 leader_election.cc:290] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 election: Requested vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.219655 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:58:57.219728 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:58:57.220137 25217 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.220201 25291 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.224653 25291 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 1.
I20250114 20:58:57.224655 25217 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 1.
I20250114 20:58:57.225772 25106 leader_election.cc:304] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.226372 25344 raft_consensus.cc:2798] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:57.227329 25344 raft_consensus.cc:695] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: Becoming Leader. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Running, Role: LEADER
I20250114 20:58:57.228091 25344 consensus_queue.cc:237] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.235942 25037 catalog_manager.cc:5526] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e reported cstate change: term changed from 0 to 1, leader changed from <none> to fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129). New cstate: current_term: 1 leader_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:57.262017 25343 raft_consensus.cc:491] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:57.262459 25343 raft_consensus.cc:513] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.264365 25343 leader_election.cc:290] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:58:57.273026 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:58:57.273638 25142 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 0.
I20250114 20:58:57.274358 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:58:57.274765 25256 leader_election.cc:304] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.275025 25217 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 0.
I20250114 20:58:57.275611 25343 raft_consensus.cc:2798] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:57.275965 25343 raft_consensus.cc:491] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:57.276201 25343 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.281121 25343 raft_consensus.cc:513] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.282572 25343 leader_election.cc:290] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 1 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:58:57.283324 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:58:57.283561 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:58:57.283985 25142 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.284080 25217 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.290478 25217 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:58:57.290478 25142 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:58:57.291688 25256 leader_election.cc:304] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, a8c725dc9d934a6588b141dced4e3582; no voters: 
I20250114 20:58:57.292270 25343 raft_consensus.cc:2798] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:57.293227 25343 raft_consensus.cc:695] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 LEADER]: Becoming Leader. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:58:57.293816 25343 consensus_queue.cc:237] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.300415 25037 catalog_manager.cc:5526] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca reported cstate change: term changed from 0 to 1, leader changed from <none> to 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131). New cstate: current_term: 1 leader_uuid: "671e2dda2f484138aa8edc159d1ae0ca" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } }
I20250114 20:58:57.471915 25342 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:57.472402 25342 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.474538 25342 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
W20250114 20:58:57.482014 25088 auto_rebalancer.cc:227] Could not retrieve cluster info: Service unavailable: Tablet not running
I20250114 20:58:57.484097 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:58:57.484795 25142 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 0.
I20250114 20:58:57.484894 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:58:57.485433 25291 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 0.
I20250114 20:58:57.485682 25182 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.486413 25342 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:57.486737 25342 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:57.486974 25342 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.491313 25342 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.492861 25342 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.493494 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:58:57.494041 25142 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.494053 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:58:57.494596 25291 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.498423 25142 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 1.
I20250114 20:58:57.498762 25291 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 1.
I20250114 20:58:57.499264 25182 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.499960 25342 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:57.500746 25342 raft_consensus.cc:695] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 LEADER]: Becoming Leader. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Running, Role: LEADER
I20250114 20:58:57.501343 25342 consensus_queue.cc:237] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.507774 25037 catalog_manager.cc:5526] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 reported cstate change: term changed from 0 to 1, leader changed from <none> to a8c725dc9d934a6588b141dced4e3582 (127.19.228.130). New cstate: current_term: 1 leader_uuid: "a8c725dc9d934a6588b141dced4e3582" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:57.550626 25344 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:57.551080 25344 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.551734 25342 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:58:57.552104 25342 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.552776 25344 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.553570 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:58:57.553777 25342 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.553839 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:58:57.554466 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:58:57.554531 25291 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 0.
I20250114 20:58:57.554576 25217 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 0.
I20250114 20:58:57.555222 25142 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 0.
I20250114 20:58:57.555212 25290 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:58:57.555976 25106 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.556080 25290 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 0.
I20250114 20:58:57.556644 25344 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:57.556972 25344 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:57.557078 25182 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:58:57.557318 25344 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.557827 25342 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:58:57.558094 25342 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:58:57.558347 25342 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.561939 25344 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.562554 25342 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.563311 25344 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 election: Requested vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.564195 25342 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:58:57.564142 25217 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:58:57.564234 25290 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:58:57.564972 25142 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:58:57.565002 25217 raft_consensus.cc:2388] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate fd1c78efd8f3422c821ae66fc7106e7e in current term 1: Already voted for candidate a8c725dc9d934a6588b141dced4e3582 in this term.
I20250114 20:58:57.565224 25290 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:58:57.565881 25142 raft_consensus.cc:2388] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate a8c725dc9d934a6588b141dced4e3582 in current term 1: Already voted for candidate fd1c78efd8f3422c821ae66fc7106e7e in this term.
I20250114 20:58:57.565148 25291 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:58:57.567098 25181 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 1 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582; no voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e
I20250114 20:58:57.567746 25342 raft_consensus.cc:2743] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader election lost for term 1. Reason: could not achieve majority
I20250114 20:58:57.570514 25290 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 1.
I20250114 20:58:57.571489 25106 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: a8c725dc9d934a6588b141dced4e3582
I20250114 20:58:57.572165 25344 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:58:57.572510 25344 raft_consensus.cc:695] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: Becoming Leader. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Running, Role: LEADER
I20250114 20:58:57.573166 25344 consensus_queue.cc:237] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:58:57.579087 25037 catalog_manager.cc:5526] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e reported cstate change: term changed from 0 to 1, leader changed from <none> to fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129). New cstate: current_term: 1 leader_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:58:57.591696 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:58:57.606667 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:57.607318 20370 tablet_replica.cc:331] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: stopping tablet replica
I20250114 20:58:57.607857 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.608232 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.610072 20370 tablet_replica.cc:331] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: stopping tablet replica
I20250114 20:58:57.610553 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:57.611060 20370 pending_rounds.cc:62] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Trying to abort 1 pending ops.
I20250114 20:58:57.611263 20370 pending_rounds.cc:69] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294629292449792 op_type: NO_OP noop_request { }
I20250114 20:58:57.611565 20370 raft_consensus.cc:2883] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:58:57.611814 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.613694 20370 tablet_replica.cc:331] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: stopping tablet replica
I20250114 20:58:57.614143 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.614466 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.615924 20370 tablet_replica.cc:331] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: stopping tablet replica
I20250114 20:58:57.616348 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:57.616834 20370 pending_rounds.cc:62] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Trying to abort 1 pending ops.
I20250114 20:58:57.617002 20370 pending_rounds.cc:69] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294630706073600 op_type: NO_OP noop_request { }
I20250114 20:58:57.617251 20370 raft_consensus.cc:2883] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:58:57.617486 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.638098 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
I20250114 20:58:57.648011 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
I20250114 20:58:57.662468 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:57.663151 20370 tablet_replica.cc:331] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: stopping tablet replica
I20250114 20:58:57.663759 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.664147 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.665709 20370 tablet_replica.cc:331] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: stopping tablet replica
I20250114 20:58:57.666136 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:57.666604 20370 pending_rounds.cc:62] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Trying to abort 1 pending ops.
I20250114 20:58:57.666741 20370 pending_rounds.cc:69] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294630411558912 op_type: NO_OP noop_request { }
I20250114 20:58:57.666966 20370 raft_consensus.cc:2883] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:58:57.667162 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.668965 20370 tablet_replica.cc:331] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: stopping tablet replica
I20250114 20:58:57.669390 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.669679 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.671303 20370 tablet_replica.cc:331] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: stopping tablet replica
I20250114 20:58:57.671756 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.672051 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.692886 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:58:57.703027 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:58:57.717587 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:58:57.718168 20370 tablet_replica.cc:331] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: stopping tablet replica
I20250114 20:58:57.718691 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:58:57.719200 20370 pending_rounds.cc:62] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Trying to abort 1 pending ops.
I20250114 20:58:57.719353 20370 pending_rounds.cc:69] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294629561430016 op_type: NO_OP noop_request { }
I20250114 20:58:57.719631 20370 raft_consensus.cc:2883] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:58:57.719857 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.721639 20370 tablet_replica.cc:331] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: stopping tablet replica
I20250114 20:58:57.722079 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.722389 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.723850 20370 tablet_replica.cc:331] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: stopping tablet replica
I20250114 20:58:57.724246 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.724538 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.725903 20370 tablet_replica.cc:331] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: stopping tablet replica
I20250114 20:58:57.726274 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:58:57.726562 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:58:57.746528 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:58:57.757117 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:58:57.762598 25367 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:57.763374 25368 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:58:57.763763 25370 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:58:57.764371 20370 server_base.cc:1034] running on GCE node
I20250114 20:58:57.765318 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:58:57.765599 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:58:57.765838 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888337765824 us; error 0 us; skew 500 ppm
I20250114 20:58:57.766364 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:58:57.768893 20370 webserver.cc:458] Webserver started at http://127.19.228.132:34577/ using document root <none> and password file <none>
I20250114 20:58:57.769433 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:58:57.769692 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:58:57.770020 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:58:57.771163 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "00fbac5707db4feea512f146618c389d"
format_stamp: "Formatted at 2025-01-14 20:58:57 on dist-test-slave-kc3q"
I20250114 20:58:57.775465 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.002s	sys 0.001s
I20250114 20:58:57.778682 25375 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:57.779712 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:58:57.780133 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "00fbac5707db4feea512f146618c389d"
format_stamp: "Formatted at 2025-01-14 20:58:57 on dist-test-slave-kc3q"
I20250114 20:58:57.780517 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:58:57.792208 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:58:57.793300 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:58:57.794926 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:58:57.797271 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:58:57.797559 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:57.797847 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:58:57.798077 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:58:57.837364 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:34683
I20250114 20:58:57.837461 25437 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:34683 every 8 connection(s)
I20250114 20:58:57.850276 25438 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:58:57.850682 25438 heartbeater.cc:463] Registering TS with master...
I20250114 20:58:57.851365 25438 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:58:57.853108 25037 ts_manager.cc:194] Registered new tserver with Master: 00fbac5707db4feea512f146618c389d (127.19.228.132:34683)
I20250114 20:58:57.854465 25037 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37272
W20250114 20:58:58.489638 25088 auto_rebalancer.cc:249] could not retrieve auto-rebalancing replica moves: Not found: table bc696e3b35a24f1c9796f8054abefcd9: could not find any suitable replica to move from server fd1c78efd8f3422c821ae66fc7106e7e to server 00fbac5707db4feea512f146618c389d
I20250114 20:58:58.856755 25438 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
W20250114 20:58:59.500348 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:58:59.503062 25026 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111) [suppressed 3 similar messages]
W20250114 20:58:59.509572 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:58:59.509943 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:58:59.513617 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:58:59.513952 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:58:59.517570 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:58:59.517889 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:00.534009 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:00.538890 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:00.539119 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:00.542639 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:00.542855 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:00.546414 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:00.546628 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:01.557257 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:01.562331 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:01.562562 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:01.566155 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:01.566373 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:01.569919 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:01.570134 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:02.572216 25088 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver a8c725dc9d934a6588b141dced4e3582 not available for placement
W20250114 20:59:03.574025 25088 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver fd1c78efd8f3422c821ae66fc7106e7e not available for placement
W20250114 20:59:04.575906 25088 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver fd1c78efd8f3422c821ae66fc7106e7e not available for placement
I20250114 20:59:04.876278 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:04.882534 25451 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:04.883713 25452 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:04.885607 25454 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:04.886286 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:04.887087 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:04.887261 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:04.887383 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888344887372 us; error 0 us; skew 500 ppm
I20250114 20:59:04.887861 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:04.893563 20370 webserver.cc:458] Webserver started at http://127.19.228.129:34919/ using document root <none> and password file <none>
I20250114 20:59:04.893993 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:04.894132 20370 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:59:04.897610 20370 fs_manager.cc:714] Time spent opening directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:59:04.900305 25459 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:04.901000 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:59:04.901244 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:59:04.901499 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:04.923066 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:04.924068 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:04.925343 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:04.928503 25466 ts_tablet_manager.cc:542] Loading tablet metadata (0/4 complete)
I20250114 20:59:04.946525 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (4 total tablets, 4 live tablets)
I20250114 20:59:04.946789 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.019s	user 0.001s	sys 0.000s
I20250114 20:59:04.947026 20370 ts_tablet_manager.cc:594] Registering tablets (0/4 complete)
I20250114 20:59:04.951113 25466 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:59:04.960466 25466 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:04.960537 20370 ts_tablet_manager.cc:610] Registered 4 tablets
I20250114 20:59:04.960817 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.014s	user 0.014s	sys 0.000s
I20250114 20:59:04.961127 25466 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap complete.
I20250114 20:59:04.961635 25466 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.011s	user 0.009s	sys 0.000s
I20250114 20:59:04.963346 25466 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:04.963887 25466 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:59:04.964468 25466 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:04.967093 25466 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.005s	user 0.005s	sys 0.000s
I20250114 20:59:04.967941 25466 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:59:04.981637 25466 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:59:04.982569 25466 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap complete.
I20250114 20:59:04.983204 25466 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.015s	user 0.010s	sys 0.003s
I20250114 20:59:04.985534 25466 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:04.986433 25466 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:59:04.987138 25466 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:04.989045 25466 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.006s	user 0.002s	sys 0.004s
I20250114 20:59:04.989813 25466 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:59:05.002509 25466 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:59:05.003455 25466 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap complete.
I20250114 20:59:05.004202 25466 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.015s	user 0.010s	sys 0.003s
I20250114 20:59:05.006237 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:36371
I20250114 20:59:05.006305 25526 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:36371 every 8 connection(s)
I20250114 20:59:05.006654 25466 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.007587 25466 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.008265 25466 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.010587 25466 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
W20250114 20:59:05.010969 25528 tablet.cc:2367] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250114 20:59:05.011466 25466 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap starting.
I20250114 20:59:05.011672 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:05.020952 25531 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:05.022815 25532 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:05.028375 20370 server_base.cc:1034] running on GCE node
W20250114 20:59:05.030160 25535 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:05.031036 25527 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:59:05.031167 25466 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.031371 25527 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:05.031404 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:05.031886 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:05.032002 25466 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Bootstrap complete.
I20250114 20:59:05.032115 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888345032095 us; error 0 us; skew 500 ppm
I20250114 20:59:05.032332 25527 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:59:05.032721 25466 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent bootstrapping tablet: real 0.021s	user 0.016s	sys 0.003s
I20250114 20:59:05.032850 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:05.035041 25466 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.035775 20370 webserver.cc:458] Webserver started at http://127.19.228.130:33447/ using document root <none> and password file <none>
I20250114 20:59:05.035826 25466 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.036504 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:05.036612 25037 ts_manager.cc:194] Re-registered known tserver with Master: fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371)
I20250114 20:59:05.036772 20370 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:59:05.036616 25466 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.039206 25466 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:59:05.041500 20370 fs_manager.cc:714] Time spent opening directory manager: real 0.003s	user 0.003s	sys 0.000s
I20250114 20:59:05.041785 25037 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37282
I20250114 20:59:05.044112 25527 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
I20250114 20:59:05.044677 25540 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:05.045493 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:59:05.045743 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "a8c725dc9d934a6588b141dced4e3582"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:59:05.045987 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:05.074151 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:05.075094 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:05.076414 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:05.096871 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (4 total tablets, 4 live tablets)
I20250114 20:59:05.097119 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.019s	user 0.001s	sys 0.000s
I20250114 20:59:05.123710 25547 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:59:05.134683 20370 ts_tablet_manager.cc:610] Registered 4 tablets
I20250114 20:59:05.134936 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.038s	user 0.015s	sys 0.003s
I20250114 20:59:05.137733 25547 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:59:05.138535 25547 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Bootstrap complete.
I20250114 20:59:05.139081 25547 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.016s	user 0.012s	sys 0.000s
I20250114 20:59:05.141613 25547 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.142761 25547 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.143647 25547 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.146517 25547 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.007s	user 0.010s	sys 0.000s
I20250114 20:59:05.147315 25547 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:59:05.159443 25547 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.160177 25547 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Bootstrap complete.
I20250114 20:59:05.160820 25547 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.014s	user 0.012s	sys 0.000s
I20250114 20:59:05.162911 25547 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.163447 25547 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.164135 25547 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.166457 25547 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:05.167084 25547 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:59:05.178771 25547 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.179759 25547 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Bootstrap complete.
I20250114 20:59:05.180459 25547 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.014s	user 0.009s	sys 0.004s
I20250114 20:59:05.183406 25547 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.184296 25547 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.185237 25547 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.187765 25547 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.007s	user 0.004s	sys 0.004s
I20250114 20:59:05.188663 25547 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Bootstrap starting.
I20250114 20:59:05.189917 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:39243
I20250114 20:59:05.190063 25608 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:39243 every 8 connection(s)
I20250114 20:59:05.196218 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:59:05.204313 25547 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.205351 25547 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Bootstrap complete.
I20250114 20:59:05.206727 25547 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Time spent bootstrapping tablet: real 0.018s	user 0.009s	sys 0.007s
W20250114 20:59:05.207751 25614 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:05.207924 25613 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:05.211279 25547 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.212320 25547 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.213596 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:05.213179 25547 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.215399 25609 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
W20250114 20:59:05.215432 25616 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:05.215719 25547 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582: Time spent starting tablet: real 0.007s	user 0.005s	sys 0.002s
I20250114 20:59:05.216015 25609 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:05.216691 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:05.216915 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:05.217026 25609 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:59:05.217096 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888345217079 us; error 0 us; skew 500 ppm
I20250114 20:59:05.217835 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:05.220773 20370 webserver.cc:458] Webserver started at http://127.19.228.131:38729/ using document root <none> and password file <none>
I20250114 20:59:05.221467 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:05.221740 20370 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:59:05.221868 25037 ts_manager.cc:194] Re-registered known tserver with Master: a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:05.226979 20370 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250114 20:59:05.229461 25037 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37288
I20250114 20:59:05.231346 25621 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:05.232215 20370 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.000s
I20250114 20:59:05.232473 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "671e2dda2f484138aa8edc159d1ae0ca"
format_stamp: "Formatted at 2025-01-14 20:58:55 on dist-test-slave-kc3q"
I20250114 20:59:05.232749 25609 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
I20250114 20:59:05.232822 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestHandlingFailedTservers.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:05.250072 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:05.251333 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:05.252789 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:05.277590 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (4 total tablets, 4 live tablets)
I20250114 20:59:05.277885 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.023s	user 0.000s	sys 0.001s
I20250114 20:59:05.282639 25628 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:59:05.292632 25628 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.293107 20370 ts_tablet_manager.cc:610] Registered 4 tablets
I20250114 20:59:05.293223 25628 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap complete.
I20250114 20:59:05.293366 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s	user 0.013s	sys 0.001s
I20250114 20:59:05.293877 25628 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.011s	user 0.010s	sys 0.000s
I20250114 20:59:05.295935 25628 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.296481 25628 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.297052 25628 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.299464 25628 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.005s	user 0.006s	sys 0.000s
I20250114 20:59:05.300076 25628 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:59:05.312299 25628 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.313143 25628 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap complete.
I20250114 20:59:05.313727 25628 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.014s	user 0.010s	sys 0.001s
I20250114 20:59:05.315959 25628 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.316553 25628 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.317139 25628 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.319080 25628 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.005s	user 0.001s	sys 0.003s
I20250114 20:59:05.319878 25628 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:59:05.334775 25628 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:05.335629 25628 tablet_bootstrap.cc:492] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap complete.
I20250114 20:59:05.336277 25628 ts_tablet_manager.cc:1397] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.017s	user 0.011s	sys 0.004s
I20250114 20:59:05.338919 25628 raft_consensus.cc:357] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.339735 25628 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.340461 25628 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.342796 25628 ts_tablet_manager.cc:1428] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.006s	user 0.003s	sys 0.000s
I20250114 20:59:05.343637 25628 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap starting.
I20250114 20:59:05.347384 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:37755
I20250114 20:59:05.347496 25688 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:37755 every 8 connection(s)
I20250114 20:59:05.359169 25628 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:59:05.360062 25628 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Bootstrap complete.
I20250114 20:59:05.360816 25628 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent bootstrapping tablet: real 0.017s	user 0.012s	sys 0.003s
I20250114 20:59:05.363767 25628 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.364226 25689 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33749
I20250114 20:59:05.364784 25628 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Initialized, Role: FOLLOWER
I20250114 20:59:05.364902 25689 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:05.365558 25628 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:05.365942 25689 heartbeater.cc:510] Master 127.19.228.190:33749 requested a full tablet report, sending...
I20250114 20:59:05.367480 25628 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:59:05.369848 25037 ts_manager.cc:194] Re-registered known tserver with Master: 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:05.374450 25037 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:37296
I20250114 20:59:05.376777 25689 heartbeater.cc:502] Master 127.19.228.190:33749 was elected leader, sending a full tablet report...
W20250114 20:59:05.596367 25088 auto_rebalancer.cc:254] failed to send replica move request: Illegal state: Replica fd1c78efd8f3422c821ae66fc7106e7e is not leader of this config. Role: FOLLOWER. Consensus state: current_term: 1 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } }
W20250114 20:59:05.600263 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 7a28538d10344370994db07360ffc8bd, TS 671e2dda2f484138aa8edc159d1ae0ca -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:05.600690 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 7a28538d10344370994db07360ffc8bd, TS 671e2dda2f484138aa8edc159d1ae0ca -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:05.612269 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 3ceb4d4fab52486690cf03f728674a66, TS fd1c78efd8f3422c821ae66fc7106e7e -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:05.612671 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 3ceb4d4fab52486690cf03f728674a66, TS fd1c78efd8f3422c821ae66fc7106e7e -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:05.623592 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 45c927b20e4641ea9e74d27149487061, TS a8c725dc9d934a6588b141dced4e3582 -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:05.623847 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 45c927b20e4641ea9e74d27149487061, TS a8c725dc9d934a6588b141dced4e3582 -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.400483 25697 raft_consensus.cc:491] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.400971 25697 raft_consensus.cc:513] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.402747 25697 leader_election.cc:290] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.422327 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:59:06.423136 25583 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 1.
I20250114 20:59:06.423346 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:59:06.424005 25663 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 1.
I20250114 20:59:06.424387 25462 leader_election.cc:304] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.425146 25697 raft_consensus.cc:2798] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:59:06.425411 25697 raft_consensus.cc:491] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.425623 25697 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.430610 25697 raft_consensus.cc:513] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.432003 25697 leader_election.cc:290] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 2 election: Requested vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.432837 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:59:06.432950 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "81b27eb642254a36a085d92231e1620f" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:59:06.433329 25583 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.433495 25663 raft_consensus.cc:3054] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.438318 25583 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 2.
I20250114 20:59:06.438357 25663 raft_consensus.cc:2463] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 2.
I20250114 20:59:06.439265 25462 leader_election.cc:304] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.439898 25697 raft_consensus.cc:2798] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:59:06.440883 25697 raft_consensus.cc:695] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 LEADER]: Becoming Leader. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Running, Role: LEADER
I20250114 20:59:06.441725 25697 consensus_queue.cc:237] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.447814 25037 catalog_manager.cc:5526] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:06.604468 25708 raft_consensus.cc:491] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.604872 25708 raft_consensus.cc:513] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.606716 25708 leader_election.cc:290] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.616762 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:59:06.617547 25501 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:59:06.618211 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:59:06.618613 25624 leader_election.cc:304] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.618748 25583 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:59:06.619283 25708 raft_consensus.cc:2798] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:59:06.619657 25708 raft_consensus.cc:491] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.619982 25708 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.624651 25708 raft_consensus.cc:513] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.626056 25708 leader_election.cc:290] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.626878 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:59:06.627058 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "45c927b20e4641ea9e74d27149487061" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:59:06.627463 25501 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.627575 25583 raft_consensus.cc:3054] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.634388 25501 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 2.
I20250114 20:59:06.635468 25624 leader_election.cc:304] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
W20250114 20:59:06.635859 25088 auto_rebalancer.cc:254] failed to send replica move request: Illegal state: Replica 671e2dda2f484138aa8edc159d1ae0ca is not leader of this config. Role: FOLLOWER. Consensus state: current_term: 2 committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } }
I20250114 20:59:06.636291 25708 raft_consensus.cc:2798] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:59:06.637591 25708 raft_consensus.cc:695] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Becoming Leader. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:59:06.638392 25708 consensus_queue.cc:237] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.638888 25583 raft_consensus.cc:2463] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 2.
W20250114 20:59:06.644752 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 45c927b20e4641ea9e74d27149487061, TS a8c725dc9d934a6588b141dced4e3582 -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:06.645017 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 45c927b20e4641ea9e74d27149487061, TS a8c725dc9d934a6588b141dced4e3582 -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.646385 25037 catalog_manager.cc:5526] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "671e2dda2f484138aa8edc159d1ae0ca" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } }
W20250114 20:59:06.648237 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 3ceb4d4fab52486690cf03f728674a66, TS fd1c78efd8f3422c821ae66fc7106e7e -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:06.648492 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 3ceb4d4fab52486690cf03f728674a66, TS fd1c78efd8f3422c821ae66fc7106e7e -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:06.651891 25088 auto_rebalancer.cc:663] Could not move replica: Incomplete: tablet 81b27eb642254a36a085d92231e1620f, TS 671e2dda2f484138aa8edc159d1ae0ca -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
W20250114 20:59:06.652118 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Incomplete: tablet 81b27eb642254a36a085d92231e1620f, TS 671e2dda2f484138aa8edc159d1ae0ca -> TS 00fbac5707db4feea512f146618c389d move failed, destination replica disappeared from tablet's Raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.658335 25708 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.658735 25708 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.660192 25708 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.660876 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:59:06.661103 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:59:06.661620 25501 raft_consensus.cc:2405] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 671e2dda2f484138aa8edc159d1ae0ca for term 2 because replica has last-logged OpId of term: 1 index: 1, which is greater than that of the candidate, which has last-logged OpId of term: 0 index: 0.
I20250114 20:59:06.661641 25583 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:59:06.662739 25624 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, a8c725dc9d934a6588b141dced4e3582; no voters: fd1c78efd8f3422c821ae66fc7106e7e
I20250114 20:59:06.663328 25708 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:59:06.663620 25708 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.663858 25708 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.668130 25708 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.669452 25708 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.670157 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:59:06.670337 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:59:06.670717 25501 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.670811 25583 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.675103 25583 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 2.
I20250114 20:59:06.675206 25501 raft_consensus.cc:2405] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 671e2dda2f484138aa8edc159d1ae0ca for term 2 because replica has last-logged OpId of term: 1 index: 1, which is greater than that of the candidate, which has last-logged OpId of term: 0 index: 0.
I20250114 20:59:06.675985 25624 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, a8c725dc9d934a6588b141dced4e3582; no voters: 
I20250114 20:59:06.676584 25708 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:59:06.676929 25708 raft_consensus.cc:695] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Becoming Leader. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:59:06.677628 25708 consensus_queue.cc:237] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.683879 25037 catalog_manager.cc:5526] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca reported cstate change: term changed from 1 to 2, leader changed from fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129) to 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131). New cstate: current_term: 2 leader_uuid: "671e2dda2f484138aa8edc159d1ae0ca" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:06.688299 25697 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.688724 25697 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.690613 25697 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.691346 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:59:06.691738 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:59:06.692006 25583 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 2.
I20250114 20:59:06.693074 25462 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 671e2dda2f484138aa8edc159d1ae0ca
I20250114 20:59:06.693663 25697 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250114 20:59:06.693907 25697 raft_consensus.cc:491] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.694110 25697 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:06.698527 25697 raft_consensus.cc:513] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.699925 25697 leader_election.cc:290] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 3 election: Requested vote from peers a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.700584 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:59:06.700857 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "7a28538d10344370994db07360ffc8bd" candidate_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:59:06.701058 25583 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:06.705857 25583 raft_consensus.cc:2463] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate fd1c78efd8f3422c821ae66fc7106e7e in term 3.
I20250114 20:59:06.706676 25462 leader_election.cc:304] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 671e2dda2f484138aa8edc159d1ae0ca
I20250114 20:59:06.707291 25697 raft_consensus.cc:2798] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Leader election won for term 3
I20250114 20:59:06.707741 25697 raft_consensus.cc:695] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 3 LEADER]: Becoming Leader. State: Replica: fd1c78efd8f3422c821ae66fc7106e7e, State: Running, Role: LEADER
I20250114 20:59:06.708488 25697 consensus_queue.cc:237] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.714416 25037 catalog_manager.cc:5526] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e reported cstate change: term changed from 2 to 3, leader changed from 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131) to fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129). New cstate: current_term: 3 leader_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:06.773672 25708 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.774108 25708 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.775624 25708 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.776405 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:59:06.776602 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582" is_pre_election: true
I20250114 20:59:06.776996 25501 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 1.
I20250114 20:59:06.777343 25583 raft_consensus.cc:2405] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 671e2dda2f484138aa8edc159d1ae0ca for term 2 because replica has last-logged OpId of term: 1 index: 1, which is greater than that of the candidate, which has last-logged OpId of term: 0 index: 0.
I20250114 20:59:06.777837 25624 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.778399 25708 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250114 20:59:06.778673 25708 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.778905 25708 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.783608 25708 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.784961 25708 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:06.785728 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:59:06.785877 25583 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "671e2dda2f484138aa8edc159d1ae0ca" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a8c725dc9d934a6588b141dced4e3582"
I20250114 20:59:06.786185 25501 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.786352 25583 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 1 FOLLOWER]: Advancing to term 2
I20250114 20:59:06.790503 25501 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 671e2dda2f484138aa8edc159d1ae0ca in term 2.
I20250114 20:59:06.790717 25583 raft_consensus.cc:2405] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader election vote request: Denying vote to candidate 671e2dda2f484138aa8edc159d1ae0ca for term 2 because replica has last-logged OpId of term: 1 index: 1, which is greater than that of the candidate, which has last-logged OpId of term: 0 index: 0.
I20250114 20:59:06.791478 25624 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 671e2dda2f484138aa8edc159d1ae0ca, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.792086 25708 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Leader election won for term 2
I20250114 20:59:06.792459 25708 raft_consensus.cc:695] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Becoming Leader. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:59:06.793143 25708 consensus_queue.cc:237] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.799083 25037 catalog_manager.cc:5526] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca reported cstate change: term changed from 1 to 2, leader changed from a8c725dc9d934a6588b141dced4e3582 (127.19.228.130) to 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131). New cstate: current_term: 2 leader_uuid: "671e2dda2f484138aa8edc159d1ae0ca" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:06.866411 25717 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:06.866459 25583 raft_consensus.cc:1270] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:59:06.866945 25717 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.867959 25697 consensus_queue.cc:1035] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:06.868990 25717 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.881913 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" is_pre_election: true
I20250114 20:59:06.882653 25501 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 2.
I20250114 20:59:06.883728 25543 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 
I20250114 20:59:06.884413 25717 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250114 20:59:06.884785 25717 raft_consensus.cc:491] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:06.885097 25717 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:06.886343 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca" is_pre_election: true
I20250114 20:59:06.891700 25663 raft_consensus.cc:1270] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:59:06.892113 25717 raft_consensus.cc:513] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.893213 25697 consensus_queue.cc:1035] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:06.896148 25663 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "671e2dda2f484138aa8edc159d1ae0ca"
I20250114 20:59:06.896381 25501 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "3ceb4d4fab52486690cf03f728674a66" candidate_uuid: "a8c725dc9d934a6588b141dced4e3582" candidate_term: 3 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "fd1c78efd8f3422c821ae66fc7106e7e"
I20250114 20:59:06.897042 25501 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:06.898038 25717 leader_election.cc:290] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 3 election: Requested vote from peers fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371), 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:06.904601 25501 raft_consensus.cc:2463] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a8c725dc9d934a6588b141dced4e3582 in term 3.
I20250114 20:59:06.905681 25543 leader_election.cc:304] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: a8c725dc9d934a6588b141dced4e3582, fd1c78efd8f3422c821ae66fc7106e7e; no voters: 671e2dda2f484138aa8edc159d1ae0ca
I20250114 20:59:06.907342 25725 raft_consensus.cc:2798] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Leader election won for term 3
I20250114 20:59:06.907871 25725 raft_consensus.cc:695] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 LEADER]: Becoming Leader. State: Replica: a8c725dc9d934a6588b141dced4e3582, State: Running, Role: LEADER
I20250114 20:59:06.908599 25725 consensus_queue.cc:237] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:06.922232 25037 catalog_manager.cc:5526] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 reported cstate change: term changed from 2 to 3, leader changed from 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131) to a8c725dc9d934a6588b141dced4e3582 (127.19.228.130). New cstate: current_term: 3 leader_uuid: "a8c725dc9d934a6588b141dced4e3582" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:07.068499 25583 raft_consensus.cc:1270] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Refusing update from remote peer 671e2dda2f484138aa8edc159d1ae0ca: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:59:07.069584 25708 consensus_queue.cc:1035] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Connected to new peer: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.079571 25501 raft_consensus.cc:1270] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Refusing update from remote peer 671e2dda2f484138aa8edc159d1ae0ca: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250114 20:59:07.081203 25583 raft_consensus.cc:1235] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Rejecting Update request from peer 671e2dda2f484138aa8edc159d1ae0ca for earlier term 2. Current term is 3. Ops: []
I20250114 20:59:07.081406 25708 consensus_queue.cc:1035] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Connected to new peer: Peer: permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.082808 25730 consensus_queue.cc:1046] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Peer responded invalid term: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: INVALID_TERM, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:59:07.083707 25730 raft_consensus.cc:3049] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Stepping down as leader of term 2
I20250114 20:59:07.084039 25730 raft_consensus.cc:738] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:59:07.084802 25730 consensus_queue.cc:260] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 2.1, Last appended by leader: 1, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:07.086445 25730 raft_consensus.cc:3054] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:07.147056 25663 raft_consensus.cc:1270] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 2 index: 1. Preceding OpId from leader: term: 3 index: 2. (index mismatch)
I20250114 20:59:07.148310 25724 consensus_queue.cc:1035] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.148911 25724 consensus_queue.cc:1225] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Peer 671e2dda2f484138aa8edc159d1ae0ca log is divergent from this leader: its last log entry 2.1 is not in this leader's log and it has not received anything from this leader yet. Falling back to committed index 0
I20250114 20:59:07.151091 25663 pending_rounds.cc:77] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Aborting all ops after (but not including) 0
I20250114 20:59:07.151333 25663 pending_rounds.cc:99] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca: Aborting uncommitted NO_OP operation due to leader change: 2.1
I20250114 20:59:07.151506 25663 raft_consensus.cc:2883] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: NO_OP replication failed: Aborted: Op aborted by new leader
I20250114 20:59:07.157290 25583 raft_consensus.cc:1270] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 3 index: 2. (index mismatch)
I20250114 20:59:07.158924 25697 consensus_queue.cc:1035] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.181383 25583 raft_consensus.cc:1235] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 LEADER]: Rejecting Update request from peer 671e2dda2f484138aa8edc159d1ae0ca for earlier term 2. Current term is 3. Ops: []
I20250114 20:59:07.182598 25730 consensus_queue.cc:1046] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Peer responded invalid term: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: INVALID_TERM, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.183231 25730 consensus_queue.cc:1225] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Peer a8c725dc9d934a6588b141dced4e3582 log is divergent from this leader: its last log entry 3.2 is not in this leader's log and it has not received anything from this leader yet. Falling back to committed index 0
I20250114 20:59:07.183602 25708 raft_consensus.cc:3049] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Stepping down as leader of term 2
I20250114 20:59:07.183851 25708 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Becoming Follower/Learner. State: Replica: 671e2dda2f484138aa8edc159d1ae0ca, State: Running, Role: LEADER
I20250114 20:59:07.184398 25708 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 2.1, Last appended by leader: 1, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } }
I20250114 20:59:07.185185 25708 raft_consensus.cc:3054] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Advancing to term 3
I20250114 20:59:07.321336 25501 raft_consensus.cc:1270] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Refusing update from remote peer a8c725dc9d934a6588b141dced4e3582: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 3 index: 2. (index mismatch)
I20250114 20:59:07.322500 25725 consensus_queue.cc:1035] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Connected to new peer: Peer: permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.334895 25663 raft_consensus.cc:1270] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Refusing update from remote peer a8c725dc9d934a6588b141dced4e3582: Log matching property violated. Preceding OpId in replica: term: 2 index: 1. Preceding OpId from leader: term: 3 index: 2. (index mismatch)
I20250114 20:59:07.336637 25725 consensus_queue.cc:1035] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Connected to new peer: Peer: permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:07.337374 25725 consensus_queue.cc:1225] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Peer 671e2dda2f484138aa8edc159d1ae0ca log is divergent from this leader: its last log entry 2.1 is not in this leader's log and it has not received anything from this leader yet. Falling back to committed index 0
I20250114 20:59:07.339677 25663 pending_rounds.cc:77] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Aborting all ops after (but not including) 0
I20250114 20:59:07.339895 25663 pending_rounds.cc:99] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca: Aborting uncommitted NO_OP operation due to leader change: 2.1
I20250114 20:59:07.340078 25663 raft_consensus.cc:2883] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: NO_OP replication failed: Aborted: Op aborted by new leader
I20250114 20:59:07.661242 25663 consensus_queue.cc:237] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:07.666545 25583 raft_consensus.cc:1270] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Refusing update from remote peer 671e2dda2f484138aa8edc159d1ae0ca: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:59:07.666792 25501 raft_consensus.cc:1270] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Refusing update from remote peer 671e2dda2f484138aa8edc159d1ae0ca: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:59:07.668435 25708 consensus_queue.cc:1035] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Connected to new peer: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:07.669463 25734 consensus_queue.cc:1035] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [LEADER]: Connected to new peer: Peer: permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:07.676012 25730 raft_consensus.cc:2949] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
W20250114 20:59:07.678942 25623 consensus_peers.cc:487] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca -> Peer 00fbac5707db4feea512f146618c389d (127.19.228.132:34683): Couldn't send request to peer 00fbac5707db4feea512f146618c389d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 45c927b20e4641ea9e74d27149487061. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:07.678323 25501 raft_consensus.cc:2949] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.682221 25583 raft_consensus.cc:2949] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.687091 25037 catalog_manager.cc:5526] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca reported cstate change: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New cstate: current_term: 2 leader_uuid: "671e2dda2f484138aa8edc159d1ae0ca" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:07.687583 25500 consensus_queue.cc:237] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:07.695909 25583 raft_consensus.cc:1270] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:59:07.697331 25697 consensus_queue.cc:1035] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:07.700646 25663 raft_consensus.cc:1270] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Refusing update from remote peer fd1c78efd8f3422c821ae66fc7106e7e: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 2 index: 3. (index mismatch)
I20250114 20:59:07.702780 25724 consensus_queue.cc:1035] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [LEADER]: Connected to new peer: Peer: permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:07.704615 25724 raft_consensus.cc:2949] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 LEADER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.706161 25583 raft_consensus.cc:2949] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
W20250114 20:59:07.711504 25461 consensus_peers.cc:487] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e -> Peer 00fbac5707db4feea512f146618c389d (127.19.228.132:34683): Couldn't send request to peer 00fbac5707db4feea512f146618c389d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 81b27eb642254a36a085d92231e1620f. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:07.716400 25662 raft_consensus.cc:2949] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Committing config change with OpId 2.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.717644 25582 consensus_queue.cc:237] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.2, Last appended by leader: 1, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:07.731366 25662 raft_consensus.cc:1270] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Refusing update from remote peer a8c725dc9d934a6588b141dced4e3582: Log matching property violated. Preceding OpId in replica: term: 3 index: 2. Preceding OpId from leader: term: 3 index: 3. (index mismatch)
I20250114 20:59:07.731071 25038 catalog_manager.cc:5526] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e reported cstate change: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New cstate: current_term: 2 leader_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:07.735116 25725 consensus_queue.cc:1035] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Connected to new peer: Peer: permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:07.737994 25500 raft_consensus.cc:1270] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Refusing update from remote peer a8c725dc9d934a6588b141dced4e3582: Log matching property violated. Preceding OpId in replica: term: 3 index: 2. Preceding OpId from leader: term: 3 index: 3. (index mismatch)
I20250114 20:59:07.739256 25725 consensus_queue.cc:1035] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [LEADER]: Connected to new peer: Peer: permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
W20250114 20:59:07.741708 25542 consensus_peers.cc:487] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 -> Peer 00fbac5707db4feea512f146618c389d (127.19.228.132:34683): Couldn't send request to peer 00fbac5707db4feea512f146618c389d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 3ceb4d4fab52486690cf03f728674a66. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:07.743072 25723 raft_consensus.cc:2949] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.747534 25662 raft_consensus.cc:2949] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.747514 25500 raft_consensus.cc:2949] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } } }
I20250114 20:59:07.757875 25037 catalog_manager.cc:5526] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 reported cstate change: config changed from index -1 to 3, NON_VOTER 00fbac5707db4feea512f146618c389d (127.19.228.132) added. New cstate: current_term: 3 leader_uuid: "a8c725dc9d934a6588b141dced4e3582" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:08.095856 25742 ts_tablet_manager.cc:927] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Initiating tablet copy from peer fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371)
I20250114 20:59:08.097586 25742 tablet_copy_client.cc:323] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.129:36371
I20250114 20:59:08.107434 25511 tablet_copy_service.cc:140] P fd1c78efd8f3422c821ae66fc7106e7e: Received BeginTabletCopySession request for tablet 81b27eb642254a36a085d92231e1620f from peer 00fbac5707db4feea512f146618c389d ({username='slave'} at 127.0.0.1:46242)
I20250114 20:59:08.107939 25511 tablet_copy_service.cc:161] P fd1c78efd8f3422c821ae66fc7106e7e: Beginning new tablet copy session on tablet 81b27eb642254a36a085d92231e1620f from peer 00fbac5707db4feea512f146618c389d at {username='slave'} at 127.0.0.1:46242: session id = 00fbac5707db4feea512f146618c389d-81b27eb642254a36a085d92231e1620f
I20250114 20:59:08.114547 25511 tablet_copy_source_session.cc:215] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:08.117326 25742 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 81b27eb642254a36a085d92231e1620f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:08.129371 25742 tablet_copy_client.cc:806] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:08.129933 25742 tablet_copy_client.cc:670] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:08.133170 25742 tablet_copy_client.cc:538] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:08.139521 25742 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Bootstrap starting.
I20250114 20:59:08.157279 25742 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Bootstrap replayed 1/1 log segments. Stats: ops{read=3 overwritten=0 applied=3 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:08.158082 25742 tablet_bootstrap.cc:492] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Bootstrap complete.
I20250114 20:59:08.158675 25742 ts_tablet_manager.cc:1397] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Time spent bootstrapping tablet: real 0.019s	user 0.012s	sys 0.008s
I20250114 20:59:08.161049 25742 raft_consensus.cc:357] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.161712 25742 raft_consensus.cc:738] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 00fbac5707db4feea512f146618c389d, State: Initialized, Role: LEARNER
I20250114 20:59:08.162248 25742 consensus_queue.cc:260] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 3, Last appended: 2.3, Last appended by leader: 3, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } attrs { replace: true } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.164902 25742 ts_tablet_manager.cc:1428] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: Time spent starting tablet: real 0.006s	user 0.008s	sys 0.000s
I20250114 20:59:08.166420 25511 tablet_copy_service.cc:342] P fd1c78efd8f3422c821ae66fc7106e7e: Request end of tablet copy session 00fbac5707db4feea512f146618c389d-81b27eb642254a36a085d92231e1620f received from {username='slave'} at 127.0.0.1:46242
I20250114 20:59:08.166818 25511 tablet_copy_service.cc:434] P fd1c78efd8f3422c821ae66fc7106e7e: ending tablet copy session 00fbac5707db4feea512f146618c389d-81b27eb642254a36a085d92231e1620f on tablet 81b27eb642254a36a085d92231e1620f with peer 00fbac5707db4feea512f146618c389d
I20250114 20:59:08.196498 25742 ts_tablet_manager.cc:927] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Initiating tablet copy from peer a8c725dc9d934a6588b141dced4e3582 (127.19.228.130:39243)
I20250114 20:59:08.198594 25742 tablet_copy_client.cc:323] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:39243
I20250114 20:59:08.218277 25593 tablet_copy_service.cc:140] P a8c725dc9d934a6588b141dced4e3582: Received BeginTabletCopySession request for tablet 3ceb4d4fab52486690cf03f728674a66 from peer 00fbac5707db4feea512f146618c389d ({username='slave'} at 127.0.0.1:52328)
I20250114 20:59:08.218744 25593 tablet_copy_service.cc:161] P a8c725dc9d934a6588b141dced4e3582: Beginning new tablet copy session on tablet 3ceb4d4fab52486690cf03f728674a66 from peer 00fbac5707db4feea512f146618c389d at {username='slave'} at 127.0.0.1:52328: session id = 00fbac5707db4feea512f146618c389d-3ceb4d4fab52486690cf03f728674a66
I20250114 20:59:08.223654 25593 tablet_copy_source_session.cc:215] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:08.224200 25750 ts_tablet_manager.cc:927] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Initiating tablet copy from peer 671e2dda2f484138aa8edc159d1ae0ca (127.19.228.131:37755)
I20250114 20:59:08.225818 25750 tablet_copy_client.cc:323] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:37755
I20250114 20:59:08.226598 25742 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3ceb4d4fab52486690cf03f728674a66. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:08.241603 25742 tablet_copy_client.cc:806] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:08.241991 25673 tablet_copy_service.cc:140] P 671e2dda2f484138aa8edc159d1ae0ca: Received BeginTabletCopySession request for tablet 45c927b20e4641ea9e74d27149487061 from peer 00fbac5707db4feea512f146618c389d ({username='slave'} at 127.0.0.1:43170)
I20250114 20:59:08.242163 25742 tablet_copy_client.cc:670] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:08.242564 25673 tablet_copy_service.cc:161] P 671e2dda2f484138aa8edc159d1ae0ca: Beginning new tablet copy session on tablet 45c927b20e4641ea9e74d27149487061 from peer 00fbac5707db4feea512f146618c389d at {username='slave'} at 127.0.0.1:43170: session id = 00fbac5707db4feea512f146618c389d-45c927b20e4641ea9e74d27149487061
I20250114 20:59:08.245702 25742 tablet_copy_client.cc:538] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:08.249573 25673 tablet_copy_source_session.cc:215] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:08.252607 25750 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 45c927b20e4641ea9e74d27149487061. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:08.253541 25742 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Bootstrap starting.
I20250114 20:59:08.266461 25750 tablet_copy_client.cc:806] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:08.266984 25750 tablet_copy_client.cc:670] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:08.270689 25742 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Bootstrap replayed 1/1 log segments. Stats: ops{read=3 overwritten=0 applied=3 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:08.271370 25750 tablet_copy_client.cc:538] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:08.271615 25742 tablet_bootstrap.cc:492] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Bootstrap complete.
I20250114 20:59:08.272315 25742 ts_tablet_manager.cc:1397] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Time spent bootstrapping tablet: real 0.019s	user 0.014s	sys 0.004s
I20250114 20:59:08.274259 25742 raft_consensus.cc:357] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [term 3 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.274938 25742 raft_consensus.cc:738] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [term 3 LEARNER]: Becoming Follower/Learner. State: Replica: 00fbac5707db4feea512f146618c389d, State: Initialized, Role: LEARNER
I20250114 20:59:08.275528 25742 consensus_queue.cc:260] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 3, Last appended: 3.3, Last appended by leader: 3, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } attrs { replace: true } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.277704 25742 ts_tablet_manager.cc:1428] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:08.278532 25750 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Bootstrap starting.
I20250114 20:59:08.279376 25593 tablet_copy_service.cc:342] P a8c725dc9d934a6588b141dced4e3582: Request end of tablet copy session 00fbac5707db4feea512f146618c389d-3ceb4d4fab52486690cf03f728674a66 received from {username='slave'} at 127.0.0.1:52328
I20250114 20:59:08.279829 25593 tablet_copy_service.cc:434] P a8c725dc9d934a6588b141dced4e3582: ending tablet copy session 00fbac5707db4feea512f146618c389d-3ceb4d4fab52486690cf03f728674a66 on tablet 3ceb4d4fab52486690cf03f728674a66 with peer 00fbac5707db4feea512f146618c389d
I20250114 20:59:08.296034 25750 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Bootstrap replayed 1/1 log segments. Stats: ops{read=3 overwritten=0 applied=3 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:08.296742 25750 tablet_bootstrap.cc:492] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Bootstrap complete.
I20250114 20:59:08.297274 25750 ts_tablet_manager.cc:1397] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Time spent bootstrapping tablet: real 0.019s	user 0.014s	sys 0.005s
I20250114 20:59:08.299441 25750 raft_consensus.cc:357] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.300168 25750 raft_consensus.cc:738] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 00fbac5707db4feea512f146618c389d, State: Initialized, Role: LEARNER
I20250114 20:59:08.300634 25750 consensus_queue.cc:260] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 3, Last appended: 2.3, Last appended by leader: 3, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "fd1c78efd8f3422c821ae66fc7106e7e" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 36371 } } peers { permanent_uuid: "a8c725dc9d934a6588b141dced4e3582" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 39243 } attrs { replace: true } } peers { permanent_uuid: "671e2dda2f484138aa8edc159d1ae0ca" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 37755 } } peers { permanent_uuid: "00fbac5707db4feea512f146618c389d" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 34683 } attrs { promote: true } }
I20250114 20:59:08.302353 25750 ts_tablet_manager.cc:1428] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: Time spent starting tablet: real 0.005s	user 0.002s	sys 0.000s
I20250114 20:59:08.303973 25673 tablet_copy_service.cc:342] P 671e2dda2f484138aa8edc159d1ae0ca: Request end of tablet copy session 00fbac5707db4feea512f146618c389d-45c927b20e4641ea9e74d27149487061 received from {username='slave'} at 127.0.0.1:43170
I20250114 20:59:08.304348 25673 tablet_copy_service.cc:434] P 671e2dda2f484138aa8edc159d1ae0ca: ending tablet copy session 00fbac5707db4feea512f146618c389d-45c927b20e4641ea9e74d27149487061 on tablet 45c927b20e4641ea9e74d27149487061 with peer 00fbac5707db4feea512f146618c389d
I20250114 20:59:08.676098 20370 tablet_server.cc:178] TabletServer@127.19.228.129:36371 shutting down...
I20250114 20:59:08.758721 25413 raft_consensus.cc:1212] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.2->[2.3-2.3]   Dedup: 2.3->[]
W20250114 20:59:08.833491 25543 proxy.cc:239] Call had error, refreshing address and retrying: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer [suppressed 8 similar messages]
I20250114 20:59:08.833436 25413 raft_consensus.cc:1212] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [term 3 LEARNER]: Deduplicated request from leader. Original: 3.2->[3.3-3.3]   Dedup: 3.3->[]
W20250114 20:59:08.839160 25543 consensus_peers.cc:487] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 -> Peer fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371): Couldn't send request to peer fd1c78efd8f3422c821ae66fc7106e7e. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:08.868288 25413 raft_consensus.cc:1212] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.2->[2.3-2.3]   Dedup: 2.3->[]
W20250114 20:59:08.873589 25088 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:59:08.873960 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
I20250114 20:59:08.894878 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:08.895531 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.896169 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:08.896728 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.898833 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.899343 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 3 LEADER]: Raft consensus shutting down.
I20250114 20:59:08.900131 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.902190 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.902779 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:59:08.903813 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P fd1c78efd8f3422c821ae66fc7106e7e [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.905844 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.906251 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:08.906672 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P fd1c78efd8f3422c821ae66fc7106e7e [term 3 FOLLOWER]: Raft consensus is shut down!
W20250114 20:59:08.914544 25624 consensus_peers.cc:487] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca -> Peer fd1c78efd8f3422c821ae66fc7106e7e (127.19.228.129:36371): Couldn't send request to peer fd1c78efd8f3422c821ae66fc7106e7e. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:08.934473 20370 tablet_server.cc:195] TabletServer@127.19.228.129:36371 shutdown complete.
I20250114 20:59:08.951527 20370 tablet_server.cc:178] TabletServer@127.19.228.130:39243 shutting down...
W20250114 20:59:08.962007 25088 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:59:08.962379 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
I20250114 20:59:08.971690 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:08.972343 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.972888 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:08.973383 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.975227 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.975773 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:08.976199 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.977897 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.978433 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:08.978842 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P a8c725dc9d934a6588b141dced4e3582 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:08.980599 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:08.981158 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 LEADER]: Raft consensus shutting down.
I20250114 20:59:08.982365 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P a8c725dc9d934a6588b141dced4e3582 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:09.007503 20370 tablet_server.cc:195] TabletServer@127.19.228.130:39243 shutdown complete.
I20250114 20:59:09.020612 20370 tablet_server.cc:178] TabletServer@127.19.228.131:37755 shutting down...
W20250114 20:59:09.025306 25088 auto_rebalancer.cc:663] Could not move replica: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
W20250114 20:59:09.025527 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer
I20250114 20:59:09.037676 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:09.038215 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:09.038694 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 LEADER]: Raft consensus shutting down.
I20250114 20:59:09.039605 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:09.041338 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:09.041762 20370 raft_consensus.cc:2238] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:09.042104 20370 raft_consensus.cc:2267] T 7a28538d10344370994db07360ffc8bd P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:09.043658 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:09.044063 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:09.044428 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P 671e2dda2f484138aa8edc159d1ae0ca [term 2 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:09.045907 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:09.046290 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:09.046661 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P 671e2dda2f484138aa8edc159d1ae0ca [term 3 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:09.068262 20370 tablet_server.cc:195] TabletServer@127.19.228.131:37755 shutdown complete.
I20250114 20:59:09.079171 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:59:09.096354 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:09.097018 20370 tablet_replica.cc:331] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d: stopping tablet replica
I20250114 20:59:09.097555 20370 raft_consensus.cc:2238] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Raft consensus shutting down.
I20250114 20:59:09.097920 20370 raft_consensus.cc:2267] T 45c927b20e4641ea9e74d27149487061 P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Raft consensus is shut down!
I20250114 20:59:09.099634 20370 tablet_replica.cc:331] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d: stopping tablet replica
I20250114 20:59:09.100070 20370 raft_consensus.cc:2238] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [term 3 LEARNER]: Raft consensus shutting down.
I20250114 20:59:09.100383 20370 raft_consensus.cc:2267] T 3ceb4d4fab52486690cf03f728674a66 P 00fbac5707db4feea512f146618c389d [term 3 LEARNER]: Raft consensus is shut down!
I20250114 20:59:09.101883 20370 tablet_replica.cc:331] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d: stopping tablet replica
I20250114 20:59:09.102274 20370 raft_consensus.cc:2238] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Raft consensus shutting down.
I20250114 20:59:09.102581 20370 raft_consensus.cc:2267] T 81b27eb642254a36a085d92231e1620f P 00fbac5707db4feea512f146618c389d [term 2 LEARNER]: Raft consensus is shut down!
W20250114 20:59:09.662940 25434 debug-util.cc:398] Leaking SignalData structure 0x7b08001fbce0 after lost signal to thread 20373
W20250114 20:59:09.663823 25434 debug-util.cc:398] Leaking SignalData structure 0x7b08000a6ca0 after lost signal to thread 25074
W20250114 20:59:10.038286 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:10.043780 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:10.044013 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:10.048190 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:10.048432 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:10.052417 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:10.052635 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:10.120077 20370 thread.cc:535] Waited for 1000ms trying to join with diag-logger (tid 25434)
W20250114 20:59:11.061969 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:11.067000 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:11.067292 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:11.071223 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:11.071511 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:11.075261 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:11.075460 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:11.120514 20370 thread.cc:535] Waited for 2000ms trying to join with diag-logger (tid 25434)
W20250114 20:59:12.087970 25088 auto_rebalancer.cc:254] failed to send replica move request: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:12.092952 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
W20250114 20:59:12.093250 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.131:37755: connect: Connection refused (error 111)
I20250114 20:59:12.093855 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
W20250114 20:59:12.097596 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:12.097846 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.130:39243: connect: Connection refused (error 111)
W20250114 20:59:12.102100 25088 auto_rebalancer.cc:663] Could not move replica: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
W20250114 20:59:12.102428 25088 auto_rebalancer.cc:264] scheduled replica move failed to complete: Network error: Client connection negotiation failed: client connection to 127.19.228.129:36371: connect: Connection refused (error 111)
I20250114 20:59:12.107860 20370 master.cc:537] Master@127.19.228.190:33749 shutting down...
I20250114 20:59:12.123660 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:12.124161 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:12.124470 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 81f6f1a81b0441b69e101c4242e5f147: stopping tablet replica
I20250114 20:59:12.142964 20370 master.cc:559] Master@127.19.228.190:33749 shutdown complete.
[       OK ] AutoRebalancerTest.TestHandlingFailedTservers (16822 ms)
[ RUN      ] AutoRebalancerTest.TestDeletedTables
I20250114 20:59:12.174742 20370 internal_mini_cluster.cc:156] Creating distributed mini masters. Addrs: 127.19.228.190:33283
I20250114 20:59:12.175824 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:12.181061 25770 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:12.181700 25771 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:12.182204 25773 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.182787 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:12.184163 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:12.184327 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:12.184437 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888352184425 us; error 0 us; skew 500 ppm
I20250114 20:59:12.184832 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:12.190850 20370 webserver.cc:458] Webserver started at http://127.19.228.190:34577/ using document root <none> and password file <none>
I20250114 20:59:12.191274 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:12.191414 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:59:12.191820 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:59:12.192808 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/master-0-root/instance:
uuid: "1385538f51144fd59540911c9a61d544"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.196831 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:59:12.199826 25778 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.200567 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.000s
I20250114 20:59:12.200824 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/master-0-root
uuid: "1385538f51144fd59540911c9a61d544"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.201081 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/master-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/master-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/master-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:12.210309 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:12.211309 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:12.245232 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.190:33283
I20250114 20:59:12.245316 25829 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.190:33283 every 8 connection(s)
I20250114 20:59:12.248747 25830 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:12.258085 25830 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Bootstrap starting.
I20250114 20:59:12.262112 25830 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:12.265753 25830 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: No bootstrap required, opened a new log
I20250114 20:59:12.267525 25830 raft_consensus.cc:357] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER }
I20250114 20:59:12.267920 25830 raft_consensus.cc:383] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:12.268142 25830 raft_consensus.cc:738] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1385538f51144fd59540911c9a61d544, State: Initialized, Role: FOLLOWER
I20250114 20:59:12.268647 25830 consensus_queue.cc:260] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER }
I20250114 20:59:12.269054 25830 raft_consensus.cc:397] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250114 20:59:12.269264 25830 raft_consensus.cc:491] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250114 20:59:12.269488 25830 raft_consensus.cc:3054] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:12.273330 25830 raft_consensus.cc:513] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER }
I20250114 20:59:12.273782 25830 leader_election.cc:304] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1385538f51144fd59540911c9a61d544; no voters: 
I20250114 20:59:12.274811 25830 leader_election.cc:290] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250114 20:59:12.275082 25833 raft_consensus.cc:2798] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:12.276262 25833 raft_consensus.cc:695] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 1 LEADER]: Becoming Leader. State: Replica: 1385538f51144fd59540911c9a61d544, State: Running, Role: LEADER
I20250114 20:59:12.276842 25833 consensus_queue.cc:237] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER }
I20250114 20:59:12.277424 25830 sys_catalog.cc:564] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [sys.catalog]: configured and running, proceeding with master startup.
I20250114 20:59:12.281195 25835 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 1385538f51144fd59540911c9a61d544. Latest consensus state: current_term: 1 leader_uuid: "1385538f51144fd59540911c9a61d544" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER } }
I20250114 20:59:12.281762 25835 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [sys.catalog]: This master's current role is: LEADER
I20250114 20:59:12.282153 25834 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "1385538f51144fd59540911c9a61d544" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1385538f51144fd59540911c9a61d544" member_type: VOTER } }
I20250114 20:59:12.282533 25834 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [sys.catalog]: This master's current role is: LEADER
I20250114 20:59:12.283644 25840 catalog_manager.cc:1476] Loading table and tablet metadata into memory...
I20250114 20:59:12.288395 25840 catalog_manager.cc:1485] Initializing Kudu cluster ID...
I20250114 20:59:12.290524 20370 internal_mini_cluster.cc:184] Waiting to initialize catalog manager on master 0
I20250114 20:59:12.296133 25840 catalog_manager.cc:1348] Generated new cluster ID: 20e81c045e984f6c9fbcad24ccb125d5
I20250114 20:59:12.296351 25840 catalog_manager.cc:1496] Initializing Kudu internal certificate authority...
I20250114 20:59:12.312841 25840 catalog_manager.cc:1371] Generated new certificate authority record
I20250114 20:59:12.314016 25840 catalog_manager.cc:1505] Loading token signing keys...
I20250114 20:59:12.342710 25840 catalog_manager.cc:5899] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Generated new TSK 0
I20250114 20:59:12.343259 25840 catalog_manager.cc:1515] Initializing in-progress tserver states...
I20250114 20:59:12.357223 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:12.363175 25851 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:12.364094 25852 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.366252 20370 server_base.cc:1034] running on GCE node
W20250114 20:59:12.367120 25854 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.367973 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:12.368162 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:12.368283 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888352368271 us; error 0 us; skew 500 ppm
I20250114 20:59:12.368695 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:12.370929 20370 webserver.cc:458] Webserver started at http://127.19.228.129:35185/ using document root <none> and password file <none>
I20250114 20:59:12.371338 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:12.371493 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:59:12.371762 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:59:12.372745 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/instance:
uuid: "83d3e5ddd1984a80a46be43f24be46b2"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.376941 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:59:12.380298 25859 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.380980 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:59:12.381253 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "83d3e5ddd1984a80a46be43f24be46b2"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.381508 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:12.392911 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:12.393962 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:12.395298 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:12.397455 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:59:12.397639 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.397841 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:59:12.397995 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.434563 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:44683
I20250114 20:59:12.434638 25921 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:44683 every 8 connection(s)
I20250114 20:59:12.438903 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:12.446092 25926 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:12.447700 25927 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.449038 20370 server_base.cc:1034] running on GCE node
W20250114 20:59:12.449791 25929 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.449955 25922 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33283
I20250114 20:59:12.450358 25922 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:12.450681 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:12.450898 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:12.451047 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888352451030 us; error 0 us; skew 500 ppm
I20250114 20:59:12.451004 25922 heartbeater.cc:510] Master 127.19.228.190:33283 requested a full tablet report, sending...
I20250114 20:59:12.451733 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:12.453208 25795 ts_manager.cc:194] Registered new tserver with Master: 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683)
I20250114 20:59:12.454150 20370 webserver.cc:458] Webserver started at http://127.19.228.130:39153/ using document root <none> and password file <none>
I20250114 20:59:12.454602 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:12.454779 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:59:12.455014 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:59:12.454988 25795 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:42762
I20250114 20:59:12.456394 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/instance:
uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.460415 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.001s	sys 0.004s
I20250114 20:59:12.463244 25934 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.463934 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:59:12.464186 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root
uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.464435 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:12.477986 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:12.478996 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:12.480307 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:12.482388 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:59:12.482578 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.482789 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:59:12.483002 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.517122 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.130:45905
I20250114 20:59:12.517212 25996 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.130:45905 every 8 connection(s)
I20250114 20:59:12.521373 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:12.528168 26000 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:12.529233 26001 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.531917 25997 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33283
I20250114 20:59:12.532253 25997 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:12.533123 25997 heartbeater.cc:510] Master 127.19.228.190:33283 requested a full tablet report, sending...
W20250114 20:59:12.533896 26003 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:12.534678 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:12.535099 25795 ts_manager.cc:194] Registered new tserver with Master: dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:12.535478 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:12.535730 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:12.535898 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888352535883 us; error 0 us; skew 500 ppm
I20250114 20:59:12.536415 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:12.536510 25795 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:42770
I20250114 20:59:12.538852 20370 webserver.cc:458] Webserver started at http://127.19.228.131:35537/ using document root <none> and password file <none>
I20250114 20:59:12.539292 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:12.539453 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:59:12.539700 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:59:12.540674 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/instance:
uuid: "887b7f1840654ca4a8278f3aa3eba169"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.544709 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.004s	user 0.005s	sys 0.000s
I20250114 20:59:12.547480 26008 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.548157 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:59:12.548408 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root
uuid: "887b7f1840654ca4a8278f3aa3eba169"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:12.548651 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:12.558413 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:12.559303 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:12.560592 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:12.562588 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:59:12.562767 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.562971 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:59:12.563115 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:12.598220 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.131:32835
I20250114 20:59:12.598318 26070 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.131:32835 every 8 connection(s)
I20250114 20:59:12.610198 26071 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33283
I20250114 20:59:12.610517 26071 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:12.611174 26071 heartbeater.cc:510] Master 127.19.228.190:33283 requested a full tablet report, sending...
I20250114 20:59:12.612847 25795 ts_manager.cc:194] Registered new tserver with Master: 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:12.613265 20370 internal_mini_cluster.cc:371] 3 TS(s) registered with all masters after 0.012117112s
I20250114 20:59:12.614415 25795 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:42784
I20250114 20:59:13.457486 25922 heartbeater.cc:502] Master 127.19.228.190:33283 was elected leader, sending a full tablet report...
I20250114 20:59:13.538820 25997 heartbeater.cc:502] Master 127.19.228.190:33283 was elected leader, sending a full tablet report...
I20250114 20:59:13.617154 26071 heartbeater.cc:502] Master 127.19.228.190:33283 was elected leader, sending a full tablet report...
I20250114 20:59:13.645952 20370 test_util.cc:274] Using random seed: -715823764
I20250114 20:59:13.666138 25795 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:42796:
name: "test-workload"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:59:13.668354 25795 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table test-workload in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:59:13.715488 25962 tablet_service.cc:1467] Processing CreateTablet for tablet 5f8e59274e0e4d6782564cd3da8080db (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:13.716849 25962 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5f8e59274e0e4d6782564cd3da8080db. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.718130 25961 tablet_service.cc:1467] Processing CreateTablet for tablet ee053a9ccc5448e492c8e9ad440cfeb1 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:13.718781 26035 tablet_service.cc:1467] Processing CreateTablet for tablet ee053a9ccc5448e492c8e9ad440cfeb1 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:13.719408 25961 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee053a9ccc5448e492c8e9ad440cfeb1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.720069 26035 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee053a9ccc5448e492c8e9ad440cfeb1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.719779 25960 tablet_service.cc:1467] Processing CreateTablet for tablet be4b7e43ed354fe889c83c2c65a05af7 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:13.720971 25960 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet be4b7e43ed354fe889c83c2c65a05af7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.718781 26036 tablet_service.cc:1467] Processing CreateTablet for tablet 5f8e59274e0e4d6782564cd3da8080db (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:13.724546 26036 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5f8e59274e0e4d6782564cd3da8080db. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.738301 26092 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:13.741714 26091 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:13.747004 26092 tablet_bootstrap.cc:654] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.747581 26091 tablet_bootstrap.cc:654] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.751461 26035 tablet_service.cc:1467] Processing CreateTablet for tablet be4b7e43ed354fe889c83c2c65a05af7 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:13.753517 26035 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet be4b7e43ed354fe889c83c2c65a05af7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.757033 25960 tablet_service.cc:1467] Processing CreateTablet for tablet 070a203b588b46e58804064fe5f00952 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:13.758250 25960 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 070a203b588b46e58804064fe5f00952. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.758664 26092 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:13.759176 26092 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.021s	user 0.012s	sys 0.004s
I20250114 20:59:13.761813 26092 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.764506 26092 raft_consensus.cc:383] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.764840 26092 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.761957 26036 tablet_service.cc:1467] Processing CreateTablet for tablet 070a203b588b46e58804064fe5f00952 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:13.764133 26091 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:13.765717 26092 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.766410 26036 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 070a203b588b46e58804064fe5f00952. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.765661 26091 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.024s	user 0.013s	sys 0.004s
I20250114 20:59:13.772893 26092 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.013s	user 0.010s	sys 0.000s
I20250114 20:59:13.773840 26092 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:13.774150 26091 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.774899 26091 raft_consensus.cc:383] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.776525 25885 tablet_service.cc:1467] Processing CreateTablet for tablet be4b7e43ed354fe889c83c2c65a05af7 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:13.777729 25885 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet be4b7e43ed354fe889c83c2c65a05af7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.779606 26091 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.779839 26092 tablet_bootstrap.cc:654] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.780558 25884 tablet_service.cc:1467] Processing CreateTablet for tablet 070a203b588b46e58804064fe5f00952 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:13.780364 26091 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.773576 25887 tablet_service.cc:1467] Processing CreateTablet for tablet 5f8e59274e0e4d6782564cd3da8080db (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:13.775087 25886 tablet_service.cc:1467] Processing CreateTablet for tablet ee053a9ccc5448e492c8e9ad440cfeb1 (DEFAULT_TABLE table=test-workload [id=1ae1abdc38cb4632afa1200635244b39]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:13.781966 25884 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 070a203b588b46e58804064fe5f00952. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.782819 25886 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee053a9ccc5448e492c8e9ad440cfeb1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.784083 25887 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5f8e59274e0e4d6782564cd3da8080db. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:13.790203 26091 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.018s	user 0.004s	sys 0.003s
I20250114 20:59:13.794607 26091 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:13.791960 26092 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:13.799937 26092 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.026s	user 0.011s	sys 0.000s
I20250114 20:59:13.802338 26092 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.803272 26092 raft_consensus.cc:383] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.803596 26092 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.804266 26092 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.806403 26092 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250114 20:59:13.810827 26092 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:13.807405 26091 tablet_bootstrap.cc:654] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.816656 26092 tablet_bootstrap.cc:654] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.823064 26091 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:13.823506 26091 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.029s	user 0.010s	sys 0.002s
I20250114 20:59:13.825945 26091 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.826640 26091 raft_consensus.cc:383] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.826923 26091 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.827525 26091 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.828401 26092 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:13.828737 26098 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:13.828915 26092 ts_tablet_manager.cc:1397] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.018s	user 0.013s	sys 0.004s
I20250114 20:59:13.830633 26091 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.007s	user 0.003s	sys 0.002s
I20250114 20:59:13.831620 26091 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:13.831571 26092 raft_consensus.cc:357] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.832433 26092 raft_consensus.cc:383] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.832724 26092 raft_consensus.cc:738] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.833468 26092 consensus_queue.cc:260] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.837368 26092 ts_tablet_manager.cc:1428] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.008s	user 0.005s	sys 0.000s
I20250114 20:59:13.837509 26091 tablet_bootstrap.cc:654] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.838352 26092 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:13.842312 26091 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:13.842824 26091 ts_tablet_manager.cc:1397] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.011s	user 0.011s	sys 0.000s
I20250114 20:59:13.843098 26098 tablet_bootstrap.cc:654] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.844172 26092 tablet_bootstrap.cc:654] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.846329 26091 raft_consensus.cc:357] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.847082 26091 raft_consensus.cc:383] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.847399 26091 raft_consensus.cc:738] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.848224 26091 consensus_queue.cc:260] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.849527 26098 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:13.849617 26092 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:13.850199 26098 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.022s	user 0.012s	sys 0.000s
I20250114 20:59:13.850383 26092 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.012s	user 0.011s	sys 0.000s
I20250114 20:59:13.850417 26091 ts_tablet_manager.cc:1428] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.007s	user 0.004s	sys 0.000s
I20250114 20:59:13.851706 26091 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:13.852807 26092 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.853605 26092 raft_consensus.cc:383] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.853885 26092 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.854573 26092 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.855746 26098 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.856498 26098 raft_consensus.cc:383] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.857409 26091 tablet_bootstrap.cc:654] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.858297 26092 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.007s	user 0.004s	sys 0.000s
I20250114 20:59:13.856801 26098 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.862124 26091 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:13.862610 26091 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.011s	user 0.007s	sys 0.003s
I20250114 20:59:13.862419 26098 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.865461 26098 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.015s	user 0.005s	sys 0.001s
I20250114 20:59:13.865242 26091 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.866053 26091 raft_consensus.cc:383] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.866389 26091 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.866532 26098 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:13.867137 26091 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.869413 26091 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.006s	user 0.002s	sys 0.003s
I20250114 20:59:13.871793 26098 tablet_bootstrap.cc:654] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.876605 26098 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:13.877074 26098 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.011s	user 0.003s	sys 0.007s
I20250114 20:59:13.879360 26098 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.879876 26098 raft_consensus.cc:383] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.880059 26098 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.880487 26098 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.881894 26098 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.005s	user 0.003s	sys 0.000s
I20250114 20:59:13.882547 26098 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:13.886523 26098 tablet_bootstrap.cc:654] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.890396 26098 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:13.890723 26098 ts_tablet_manager.cc:1397] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.008s	user 0.007s	sys 0.000s
I20250114 20:59:13.892475 26098 raft_consensus.cc:357] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.892961 26098 raft_consensus.cc:383] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.893158 26098 raft_consensus.cc:738] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.893663 26098 consensus_queue.cc:260] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.895664 26098 ts_tablet_manager.cc:1428] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.005s	user 0.006s	sys 0.000s
I20250114 20:59:13.896512 26098 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:13.900689 26098 tablet_bootstrap.cc:654] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:13.904986 26098 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:13.905313 26098 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.009s	user 0.007s	sys 0.000s
I20250114 20:59:13.906992 26098 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.907528 26098 raft_consensus.cc:383] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:13.907759 26098 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:13.908231 26098 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.909746 26098 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:13.914404 26097 raft_consensus.cc:491] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:13.914870 26097 raft_consensus.cc:513] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.916824 26097 leader_election.cc:290] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835), 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683)
I20250114 20:59:13.925597 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5f8e59274e0e4d6782564cd3da8080db" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169" is_pre_election: true
I20250114 20:59:13.925899 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5f8e59274e0e4d6782564cd3da8080db" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:13.926195 26046 raft_consensus.cc:2463] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 0.
I20250114 20:59:13.926448 25897 raft_consensus.cc:2463] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 0.
I20250114 20:59:13.927042 26095 raft_consensus.cc:491] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:13.927281 25936 leader_election.cc:304] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 887b7f1840654ca4a8278f3aa3eba169, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:13.927506 26095 raft_consensus.cc:513] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.928144 26097 raft_consensus.cc:2798] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:13.928489 26097 raft_consensus.cc:491] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:13.928761 26097 raft_consensus.cc:3054] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.929811 26111 raft_consensus.cc:491] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:13.930002 26095 leader_election.cc:290] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:13.930305 26111 raft_consensus.cc:513] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.932448 26111 leader_election.cc:290] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:13.935022 26097 raft_consensus.cc:513] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.937111 26097 leader_election.cc:290] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 election: Requested vote from peers 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835), 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683)
I20250114 20:59:13.937491 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5f8e59274e0e4d6782564cd3da8080db" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169"
I20250114 20:59:13.937664 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "5f8e59274e0e4d6782564cd3da8080db" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:13.938237 25897 raft_consensus.cc:3054] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.938225 26046 raft_consensus.cc:3054] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.944348 25896 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "070a203b588b46e58804064fe5f00952" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:13.944777 25895 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ee053a9ccc5448e492c8e9ad440cfeb1" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:13.945071 25896 raft_consensus.cc:2463] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:13.945188 25971 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ee053a9ccc5448e492c8e9ad440cfeb1" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:13.944700 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "070a203b588b46e58804064fe5f00952" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:13.946053 26009 leader_election.cc:304] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:13.946082 25972 raft_consensus.cc:2463] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:13.946285 25971 raft_consensus.cc:2463] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:13.947103 26111 raft_consensus.cc:2798] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:13.947472 26111 raft_consensus.cc:491] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:13.945528 25895 raft_consensus.cc:2463] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:13.947970 26011 leader_election.cc:304] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 887b7f1840654ca4a8278f3aa3eba169, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:13.948421 26111 raft_consensus.cc:3054] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.949247 26095 raft_consensus.cc:2798] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:13.949620 26095 raft_consensus.cc:491] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:13.949868 26046 raft_consensus.cc:2463] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 1.
I20250114 20:59:13.949951 26095 raft_consensus.cc:3054] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.950987 25936 leader_election.cc:304] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 887b7f1840654ca4a8278f3aa3eba169, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:13.951690 26097 raft_consensus.cc:2798] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:13.952270 25897 raft_consensus.cc:2463] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 1.
I20250114 20:59:13.954021 26097 raft_consensus.cc:695] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Becoming Leader. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Running, Role: LEADER
I20250114 20:59:13.954634 26097 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:13.956665 26095 raft_consensus.cc:513] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.957157 26111 raft_consensus.cc:513] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.959407 26111 leader_election.cc:290] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:13.959859 25895 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "070a203b588b46e58804064fe5f00952" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:13.960348 25895 raft_consensus.cc:3054] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.960461 25971 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "070a203b588b46e58804064fe5f00952" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:13.961045 25971 raft_consensus.cc:3054] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.961195 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ee053a9ccc5448e492c8e9ad440cfeb1" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:13.961766 25897 raft_consensus.cc:3054] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.963835 25794 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: term changed from 0 to 1, leader changed from <none> to dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130). New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:13.967089 25895 raft_consensus.cc:2463] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:13.967581 25897 raft_consensus.cc:2463] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:13.968067 26009 leader_election.cc:304] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:13.968583 26095 leader_election.cc:290] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:13.968947 26009 leader_election.cc:304] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:13.969161 26095 raft_consensus.cc:2798] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:13.969895 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "ee053a9ccc5448e492c8e9ad440cfeb1" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:13.970379 26095 raft_consensus.cc:695] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Becoming Leader. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Running, Role: LEADER
I20250114 20:59:13.970467 25972 raft_consensus.cc:3054] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:13.970714 26116 raft_consensus.cc:2798] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:13.971292 26095 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.975675 25971 raft_consensus.cc:2463] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:13.977488 26116 raft_consensus.cc:695] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Becoming Leader. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Running, Role: LEADER
I20250114 20:59:13.978168 25972 raft_consensus.cc:2463] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:13.978456 26116 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:13.985797 25794 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: term changed from 0 to 1, leader changed from <none> to 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:13.986956 25794 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: term changed from 0 to 1, leader changed from <none> to 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:14.001852 26116 raft_consensus.cc:491] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:14.002265 26116 raft_consensus.cc:513] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.003877 26116 leader_election.cc:290] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.004577 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "be4b7e43ed354fe889c83c2c65a05af7" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:14.004757 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "be4b7e43ed354fe889c83c2c65a05af7" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:14.005082 25897 raft_consensus.cc:2463] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.005293 25972 raft_consensus.cc:2463] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.005955 26009 leader_election.cc:304] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.006672 26116 raft_consensus.cc:2798] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:14.006947 26116 raft_consensus.cc:491] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:14.007180 26116 raft_consensus.cc:3054] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.011925 26116 raft_consensus.cc:513] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.013190 26116 leader_election.cc:290] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.013934 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "be4b7e43ed354fe889c83c2c65a05af7" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:14.014009 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "be4b7e43ed354fe889c83c2c65a05af7" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:14.014448 25897 raft_consensus.cc:3054] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.014535 25972 raft_consensus.cc:3054] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.018604 25897 raft_consensus.cc:2463] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.018672 25972 raft_consensus.cc:2463] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.019443 26009 leader_election.cc:304] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.020083 26116 raft_consensus.cc:2798] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:14.020423 26116 raft_consensus.cc:695] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Becoming Leader. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Running, Role: LEADER
I20250114 20:59:14.021036 26116 consensus_queue.cc:237] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.027621 25794 catalog_manager.cc:5526] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: term changed from 0 to 1, leader changed from <none> to 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:14.080977 20370 test_util.cc:274] Using random seed: -715388743
I20250114 20:59:14.101267 25794 catalog_manager.cc:1909] Servicing CreateTable request from {username='slave'} at 127.0.0.1:42798:
name: "dugtrio"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
  rows: "\004\001\000\377\377\377\037\004\001\000\376\377\377?\004\001\000\375\377\377_"
  indirect_data: ""
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250114 20:59:14.104054 25794 catalog_manager.cc:6885] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table dugtrio in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250114 20:59:14.131631 25886 tablet_service.cc:1467] Processing CreateTablet for tablet 8579f5f1de554a51ac89b981fe372756 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:14.132241 25960 tablet_service.cc:1467] Processing CreateTablet for tablet 8579f5f1de554a51ac89b981fe372756 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:14.132820 25886 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8579f5f1de554a51ac89b981fe372756. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.133352 25960 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8579f5f1de554a51ac89b981fe372756. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.134348 26035 tablet_service.cc:1467] Processing CreateTablet for tablet 8579f5f1de554a51ac89b981fe372756 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION VALUES < 536870911
I20250114 20:59:14.135483 26035 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8579f5f1de554a51ac89b981fe372756. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.138851 26036 tablet_service.cc:1467] Processing CreateTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:14.134847 25884 tablet_service.cc:1467] Processing CreateTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:14.140012 26036 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1561b3c6c3dc4e4b92775843e64fba46. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.141976 25884 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1561b3c6c3dc4e4b92775843e64fba46. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.142244 25962 tablet_service.cc:1467] Processing CreateTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 536870911 <= VALUES < 1073741822
I20250114 20:59:14.143339 25962 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 1561b3c6c3dc4e4b92775843e64fba46. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.143695 25885 tablet_service.cc:1467] Processing CreateTablet for tablet 15234403822b45e79875e8d0987f01ad (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:14.144371 25887 tablet_service.cc:1467] Processing CreateTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:14.144810 25885 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 15234403822b45e79875e8d0987f01ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.145570 25887 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fc2d96adc6d34e3ba9075ad391a1f37a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.145313 25961 tablet_service.cc:1467] Processing CreateTablet for tablet 15234403822b45e79875e8d0987f01ad (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:14.146410 25961 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 15234403822b45e79875e8d0987f01ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.147367 26034 tablet_service.cc:1467] Processing CreateTablet for tablet 15234403822b45e79875e8d0987f01ad (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1073741822 <= VALUES < 1610612733
I20250114 20:59:14.148555 25959 tablet_service.cc:1467] Processing CreateTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:14.149699 25959 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fc2d96adc6d34e3ba9075ad391a1f37a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.149552 26033 tablet_service.cc:1467] Processing CreateTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a (DEFAULT_TABLE table=dugtrio [id=e14b860cd50442dd8e49f707f706ad01]), partition=RANGE (key) PARTITION 1610612733 <= VALUES
I20250114 20:59:14.148578 26034 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 15234403822b45e79875e8d0987f01ad. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.155851 26033 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet fc2d96adc6d34e3ba9075ad391a1f37a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:14.164119 26091 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:14.168407 26098 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:14.170550 26091 tablet_bootstrap.cc:654] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.174062 26098 tablet_bootstrap.cc:654] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.183252 26092 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:14.188750 26092 tablet_bootstrap.cc:654] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.195340 26092 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:14.195827 26092 ts_tablet_manager.cc:1397] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.013s	user 0.009s	sys 0.002s
I20250114 20:59:14.197078 26091 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:14.199013 26091 ts_tablet_manager.cc:1397] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.035s	user 0.014s	sys 0.001s
I20250114 20:59:14.201345 26092 raft_consensus.cc:357] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.201526 26091 raft_consensus.cc:357] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.202071 26092 raft_consensus.cc:383] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.202286 26091 raft_consensus.cc:383] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.202364 26092 raft_consensus.cc:738] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.202816 26091 raft_consensus.cc:738] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.203706 26091 consensus_queue.cc:260] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.205247 26098 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:14.205608 26098 ts_tablet_manager.cc:1397] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.037s	user 0.009s	sys 0.000s
I20250114 20:59:14.207923 26092 consensus_queue.cc:260] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.208570 26091 ts_tablet_manager.cc:1428] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.009s	user 0.003s	sys 0.003s
I20250114 20:59:14.209476 26091 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:14.209887 26098 raft_consensus.cc:357] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.210552 26098 raft_consensus.cc:383] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.210827 26098 raft_consensus.cc:738] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.211462 26098 consensus_queue.cc:260] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.213002 26092 ts_tablet_manager.cc:1428] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.017s	user 0.002s	sys 0.003s
I20250114 20:59:14.213697 26098 ts_tablet_manager.cc:1428] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.008s	user 0.006s	sys 0.000s
I20250114 20:59:14.213922 26092 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:14.214636 26098 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:14.219472 26091 tablet_bootstrap.cc:654] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.220403 26098 tablet_bootstrap.cc:654] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.220604 26092 tablet_bootstrap.cc:654] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.231595 26098 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:14.232095 26098 ts_tablet_manager.cc:1397] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.018s	user 0.008s	sys 0.009s
I20250114 20:59:14.234828 26098 raft_consensus.cc:357] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.235652 26098 raft_consensus.cc:383] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.235940 26098 raft_consensus.cc:738] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.236608 26098 consensus_queue.cc:260] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.238946 26098 ts_tablet_manager.cc:1428] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.007s	user 0.000s	sys 0.004s
I20250114 20:59:14.246155 26098 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:14.252485 26098 tablet_bootstrap.cc:654] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.254765 26092 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:14.255038 26091 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:14.255225 26092 ts_tablet_manager.cc:1397] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.041s	user 0.021s	sys 0.017s
I20250114 20:59:14.255573 26091 ts_tablet_manager.cc:1397] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.046s	user 0.008s	sys 0.030s
I20250114 20:59:14.258474 26092 raft_consensus.cc:357] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.259282 26092 raft_consensus.cc:383] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.259627 26092 raft_consensus.cc:738] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.259745 26091 raft_consensus.cc:357] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.260505 26091 raft_consensus.cc:383] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.260906 26091 raft_consensus.cc:738] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.260521 26092 consensus_queue.cc:260] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.261474 26091 consensus_queue.cc:260] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.263172 26092 ts_tablet_manager.cc:1428] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.007s	user 0.001s	sys 0.004s
I20250114 20:59:14.264184 26092 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:14.265429 26091 ts_tablet_manager.cc:1428] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.009s	user 0.005s	sys 0.000s
I20250114 20:59:14.267665 26091 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:14.268805 26098 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:14.269275 26098 ts_tablet_manager.cc:1397] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.023s	user 0.019s	sys 0.000s
I20250114 20:59:14.271667 26098 raft_consensus.cc:357] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.272352 26098 raft_consensus.cc:383] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.272675 26098 raft_consensus.cc:738] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.273161 26091 tablet_bootstrap.cc:654] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.273181 26092 tablet_bootstrap.cc:654] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.273279 26098 consensus_queue.cc:260] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.278007 26098 ts_tablet_manager.cc:1428] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.008s	user 0.000s	sys 0.006s
I20250114 20:59:14.279106 26098 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:14.284977 26098 tablet_bootstrap.cc:654] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Neither blocks nor log segments found. Creating new log.
W20250114 20:59:14.287912 25843 auto_rebalancer.cc:227] Could not retrieve cluster info: Service unavailable: Tablet not running
I20250114 20:59:14.306438 26092 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:14.306440 26098 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: No bootstrap required, opened a new log
I20250114 20:59:14.307117 26098 ts_tablet_manager.cc:1397] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.028s	user 0.019s	sys 0.008s
I20250114 20:59:14.307103 26092 ts_tablet_manager.cc:1397] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.043s	user 0.006s	sys 0.024s
I20250114 20:59:14.307417 26091 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:14.307936 26091 ts_tablet_manager.cc:1397] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.041s	user 0.009s	sys 0.025s
I20250114 20:59:14.309300 26098 raft_consensus.cc:357] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.309787 26098 raft_consensus.cc:383] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.309980 26098 raft_consensus.cc:738] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.309939 26092 raft_consensus.cc:357] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.310295 26091 raft_consensus.cc:357] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.310866 26092 raft_consensus.cc:383] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.310703 26098 consensus_queue.cc:260] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.310994 26091 raft_consensus.cc:383] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.311203 26092 raft_consensus.cc:738] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.311398 26091 raft_consensus.cc:738] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.311995 26092 consensus_queue.cc:260] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.312141 26091 consensus_queue.cc:260] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.312639 26098 ts_tablet_manager.cc:1428] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:14.316229 26091 ts_tablet_manager.cc:1428] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.008s	user 0.003s	sys 0.003s
I20250114 20:59:14.317147 26091 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Bootstrap starting.
I20250114 20:59:14.320593 26092 ts_tablet_manager.cc:1428] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.013s	user 0.004s	sys 0.001s
I20250114 20:59:14.325074 26092 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Bootstrap starting.
I20250114 20:59:14.325304 26091 tablet_bootstrap.cc:654] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.331156 26092 tablet_bootstrap.cc:654] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Neither blocks nor log segments found. Creating new log.
I20250114 20:59:14.331435 26091 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: No bootstrap required, opened a new log
I20250114 20:59:14.331894 26091 ts_tablet_manager.cc:1397] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Time spent bootstrapping tablet: real 0.015s	user 0.007s	sys 0.004s
I20250114 20:59:14.334082 26091 raft_consensus.cc:357] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.334642 26091 raft_consensus.cc:383] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.334892 26091 raft_consensus.cc:738] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.335496 26092 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: No bootstrap required, opened a new log
I20250114 20:59:14.335454 26091 consensus_queue.cc:260] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.335987 26092 ts_tablet_manager.cc:1397] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Time spent bootstrapping tablet: real 0.011s	user 0.010s	sys 0.000s
I20250114 20:59:14.337468 26091 ts_tablet_manager.cc:1428] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:14.338056 26097 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:14.338186 26092 raft_consensus.cc:357] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.338717 26092 raft_consensus.cc:383] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250114 20:59:14.338922 26092 raft_consensus.cc:738] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Initialized, Role: FOLLOWER
I20250114 20:59:14.339442 26092 consensus_queue.cc:260] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.341403 26092 ts_tablet_manager.cc:1428] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Time spent starting tablet: real 0.005s	user 0.005s	sys 0.000s
I20250114 20:59:14.350805 26105 raft_consensus.cc:491] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:14.351195 26105 raft_consensus.cc:513] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.353165 26105 leader_election.cc:290] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905), 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:14.353466 26114 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:59:14.377552 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1561b3c6c3dc4e4b92775843e64fba46" candidate_uuid: "83d3e5ddd1984a80a46be43f24be46b2" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169" is_pre_election: true
I20250114 20:59:14.378103 26046 raft_consensus.cc:2463] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 83d3e5ddd1984a80a46be43f24be46b2 in term 0.
I20250114 20:59:14.379009 25861 leader_election.cc:304] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.379803 26105 raft_consensus.cc:2798] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:14.380153 26105 raft_consensus.cc:491] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:14.380474 26105 raft_consensus.cc:3054] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.380928 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1561b3c6c3dc4e4b92775843e64fba46" candidate_uuid: "83d3e5ddd1984a80a46be43f24be46b2" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:14.381500 25972 raft_consensus.cc:2463] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 83d3e5ddd1984a80a46be43f24be46b2 in term 0.
I20250114 20:59:14.385900 26105 raft_consensus.cc:513] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.387559 26105 leader_election.cc:290] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [CANDIDATE]: Term 1 election: Requested vote from peers dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905), 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:14.388274 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1561b3c6c3dc4e4b92775843e64fba46" candidate_uuid: "83d3e5ddd1984a80a46be43f24be46b2" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:14.389191 25972 raft_consensus.cc:3054] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.388661 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "1561b3c6c3dc4e4b92775843e64fba46" candidate_uuid: "83d3e5ddd1984a80a46be43f24be46b2" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169"
I20250114 20:59:14.392302 26046 raft_consensus.cc:3054] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.394462 25972 raft_consensus.cc:2463] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 83d3e5ddd1984a80a46be43f24be46b2 in term 1.
I20250114 20:59:14.395421 25862 leader_election.cc:304] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:14.396078 26105 raft_consensus.cc:2798] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:14.396975 26046 raft_consensus.cc:2463] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 83d3e5ddd1984a80a46be43f24be46b2 in term 1.
I20250114 20:59:14.396981 26105 raft_consensus.cc:695] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 LEADER]: Becoming Leader. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Running, Role: LEADER
I20250114 20:59:14.398149 26105 consensus_queue.cc:237] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.400460 26116 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:14.406019 25795 catalog_manager.cc:5526] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 reported cstate change: term changed from 0 to 1, leader changed from <none> to 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129). New cstate: current_term: 1 leader_uuid: "83d3e5ddd1984a80a46be43f24be46b2" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:14.410995 26117 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:59:14.435307 26117 consensus_queue.cc:1035] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:14.444465 26117 consensus_queue.cc:1035] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250114 20:59:14.458914 26117 raft_consensus.cc:491] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:14.459282 26117 raft_consensus.cc:513] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.460932 26117 leader_election.cc:290] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.461524 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "fc2d96adc6d34e3ba9075ad391a1f37a" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:14.461830 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "fc2d96adc6d34e3ba9075ad391a1f37a" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:14.462045 25897 raft_consensus.cc:2463] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.462365 25972 raft_consensus.cc:2463] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.462865 26009 leader_election.cc:304] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.463588 26117 raft_consensus.cc:2798] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:14.463892 26117 raft_consensus.cc:491] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:14.464123 26117 raft_consensus.cc:3054] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.468765 26117 raft_consensus.cc:513] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.470444 26117 leader_election.cc:290] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.470517 26116 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:14.471338 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "fc2d96adc6d34e3ba9075ad391a1f37a" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:14.471899 25897 raft_consensus.cc:3054] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.472188 25972 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "fc2d96adc6d34e3ba9075ad391a1f37a" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:14.472666 25972 raft_consensus.cc:3054] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.479581 26097 raft_consensus.cc:491] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:14.480041 26097 raft_consensus.cc:513] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.481325 26116 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:14.481834 26097 leader_election.cc:290] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:14.482584 25895 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "8579f5f1de554a51ac89b981fe372756" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:14.483183 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "8579f5f1de554a51ac89b981fe372756" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169" is_pre_election: true
I20250114 20:59:14.483527 25895 raft_consensus.cc:2463] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 0.
I20250114 20:59:14.483799 26046 raft_consensus.cc:2463] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 0.
I20250114 20:59:14.484717 25935 leader_election.cc:304] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:14.479082 25972 raft_consensus.cc:2463] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.485991 26114 raft_consensus.cc:2798] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:14.486336 26114 raft_consensus.cc:491] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:14.486701 25897 raft_consensus.cc:2463] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.486690 26011 leader_election.cc:304] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 887b7f1840654ca4a8278f3aa3eba169, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:14.487788 26117 raft_consensus.cc:2798] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:14.486707 26114 raft_consensus.cc:3054] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.488289 26117 raft_consensus.cc:695] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Becoming Leader. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Running, Role: LEADER
I20250114 20:59:14.489065 26117 consensus_queue.cc:237] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.494546 26114 raft_consensus.cc:513] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.497242 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "8579f5f1de554a51ac89b981fe372756" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:14.497845 25897 raft_consensus.cc:3054] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.502089 26114 leader_election.cc:290] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:14.502341 26046 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "8579f5f1de554a51ac89b981fe372756" candidate_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "887b7f1840654ca4a8278f3aa3eba169"
I20250114 20:59:14.502940 26046 raft_consensus.cc:3054] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.504575 25897 raft_consensus.cc:2463] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 1.
I20250114 20:59:14.505663 25935 leader_election.cc:304] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, dd425b8989c641ecba6b4b51af3f9f5c; no voters: 
I20250114 20:59:14.506479 26114 raft_consensus.cc:2798] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:14.506906 26114 raft_consensus.cc:695] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Becoming Leader. State: Replica: dd425b8989c641ecba6b4b51af3f9f5c, State: Running, Role: LEADER
I20250114 20:59:14.507637 26114 consensus_queue.cc:237] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.509135 26046 raft_consensus.cc:2463] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate dd425b8989c641ecba6b4b51af3f9f5c in term 1.
I20250114 20:59:14.512459 25793 catalog_manager.cc:5526] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: term changed from 0 to 1, leader changed from <none> to 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:14.515828 25792 catalog_manager.cc:5526] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: term changed from 0 to 1, leader changed from <none> to dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130). New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:14.598800 26117 raft_consensus.cc:491] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250114 20:59:14.599241 26117 raft_consensus.cc:513] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.600780 26117 leader_election.cc:290] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.601511 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "15234403822b45e79875e8d0987f01ad" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2" is_pre_election: true
I20250114 20:59:14.601773 25971 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "15234403822b45e79875e8d0987f01ad" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" is_pre_election: true
I20250114 20:59:14.602306 25897 raft_consensus.cc:2463] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.602339 25971 raft_consensus.cc:2463] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 0.
I20250114 20:59:14.603387 26009 leader_election.cc:304] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.604027 26117 raft_consensus.cc:2798] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250114 20:59:14.604298 26117 raft_consensus.cc:491] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250114 20:59:14.604535 26117 raft_consensus.cc:3054] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.608846 26117 raft_consensus.cc:513] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.610266 26117 leader_election.cc:290] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Requested vote from peers 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683), dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:14.611088 25897 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "15234403822b45e79875e8d0987f01ad" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "83d3e5ddd1984a80a46be43f24be46b2"
I20250114 20:59:14.611174 25971 tablet_service.cc:1812] Received RequestConsensusVote() RPC: tablet_id: "15234403822b45e79875e8d0987f01ad" candidate_uuid: "887b7f1840654ca4a8278f3aa3eba169" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "dd425b8989c641ecba6b4b51af3f9f5c"
I20250114 20:59:14.611581 25897 raft_consensus.cc:3054] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.611701 25971 raft_consensus.cc:3054] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 0 FOLLOWER]: Advancing to term 1
I20250114 20:59:14.615939 25897 raft_consensus.cc:2463] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.616057 25971 raft_consensus.cc:2463] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 887b7f1840654ca4a8278f3aa3eba169 in term 1.
I20250114 20:59:14.616803 26009 leader_election.cc:304] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 83d3e5ddd1984a80a46be43f24be46b2, 887b7f1840654ca4a8278f3aa3eba169; no voters: 
I20250114 20:59:14.617444 26117 raft_consensus.cc:2798] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Leader election won for term 1
I20250114 20:59:14.617800 26117 raft_consensus.cc:695] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Becoming Leader. State: Replica: 887b7f1840654ca4a8278f3aa3eba169, State: Running, Role: LEADER
I20250114 20:59:14.618468 26117 consensus_queue.cc:237] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:14.624513 25792 catalog_manager.cc:5526] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: term changed from 0 to 1, leader changed from <none> to 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131). New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:14.742180 20370 tablet_server.cc:178] TabletServer@127.19.228.129:0 shutting down...
I20250114 20:59:14.757143 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:14.757807 20370 tablet_replica.cc:331] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.758383 20370 raft_consensus.cc:2238] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:14.758919 20370 pending_rounds.cc:62] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Trying to abort 1 pending ops.
I20250114 20:59:14.759096 20370 pending_rounds.cc:69] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294699622473728 op_type: NO_OP noop_request { }
I20250114 20:59:14.759390 20370 raft_consensus.cc:2883] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 LEADER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:59:14.759665 20370 raft_consensus.cc:2267] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.761662 20370 tablet_replica.cc:331] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.762151 20370 raft_consensus.cc:2238] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.762477 20370 raft_consensus.cc:2267] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.764122 20370 tablet_replica.cc:331] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.764590 20370 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.764930 20370 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.766716 20370 tablet_replica.cc:331] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.767177 20370 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.767518 20370 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.769483 20370 tablet_replica.cc:331] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.769950 20370 raft_consensus.cc:2238] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.770262 20370 raft_consensus.cc:2267] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.771737 20370 tablet_replica.cc:331] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.772189 20370 raft_consensus.cc:2238] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.772523 20370 raft_consensus.cc:2267] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.774281 20370 tablet_replica.cc:331] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.774724 20370 raft_consensus.cc:2238] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.775030 20370 raft_consensus.cc:2267] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.776469 20370 tablet_replica.cc:331] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: stopping tablet replica
I20250114 20:59:14.776872 20370 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:14.777194 20370 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:14.801600 20370 tablet_server.cc:195] TabletServer@127.19.228.129:0 shutdown complete.
W20250114 20:59:14.875921 25935 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111) [suppressed 13 similar messages]
W20250114 20:59:14.878913 25935 consensus_peers.cc:487] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:14.934661 26009 consensus_peers.cc:487] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:14.945554 26009 consensus_peers.cc:487] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:14.975090 26009 consensus_peers.cc:487] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:15.020434 25935 consensus_peers.cc:487] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:15.032794 26117 consensus_queue.cc:1035] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250114 20:59:15.034508 26009 consensus_peers.cc:487] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:15.036377 26009 consensus_peers.cc:487] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:15.061753 26114 consensus_queue.cc:1035] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:15.100405 26117 consensus_queue.cc:1035] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250114 20:59:15.492491 26117 consensus_queue.cc:579] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.039s)
W20250114 20:59:15.499729 25792 catalog_manager.cc:5204] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet be4b7e43ed354fe889c83c2c65a05af7 with cas_config_opid_index -1: no extra replica candidate found for tablet be4b7e43ed354fe889c83c2c65a05af7 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (3) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
I20250114 20:59:15.518005 26117 consensus_queue.cc:579] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.103s)
I20250114 20:59:15.529685 26116 consensus_queue.cc:579] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.047s)
I20250114 20:59:15.531375 26097 consensus_queue.cc:579] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.022s)
I20250114 20:59:15.651293 26116 consensus_queue.cc:579] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.032s)
I20250114 20:59:15.715508 26097 consensus_queue.cc:579] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.341s)
I20250114 20:59:15.833425 25793 catalog_manager.cc:2462] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:42796:
table { table_name: "dugtrio" } modify_external_catalogs: true
I20250114 20:59:15.834092 25793 catalog_manager.cc:2646] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:42796:
table { table_name: "dugtrio" } modify_external_catalogs: true
I20250114 20:59:15.851374 25793 catalog_manager.cc:5813] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Sending DeleteTablet for 3 replicas of tablet 8579f5f1de554a51ac89b981fe372756
I20250114 20:59:15.852818 25793 catalog_manager.cc:5813] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Sending DeleteTablet for 3 replicas of tablet 1561b3c6c3dc4e4b92775843e64fba46
I20250114 20:59:15.854192 25959 tablet_service.cc:1514] Processing DeleteTablet for tablet 8579f5f1de554a51ac89b981fe372756 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:38918
I20250114 20:59:15.855118 25962 tablet_service.cc:1514] Processing DeleteTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:38918
I20250114 20:59:15.855301 25793 catalog_manager.cc:5813] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Sending DeleteTablet for 3 replicas of tablet 15234403822b45e79875e8d0987f01ad
I20250114 20:59:15.855974 26033 tablet_service.cc:1514] Processing DeleteTablet for tablet 8579f5f1de554a51ac89b981fe372756 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:56356
I20250114 20:59:15.857430 25962 tablet_service.cc:1514] Processing DeleteTablet for tablet 15234403822b45e79875e8d0987f01ad with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:38918
I20250114 20:59:15.858310 26036 tablet_service.cc:1514] Processing DeleteTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:56356
I20250114 20:59:15.858448 25793 catalog_manager.cc:5813] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: Sending DeleteTablet for 3 replicas of tablet fc2d96adc6d34e3ba9075ad391a1f37a
I20250114 20:59:15.860011 26036 tablet_service.cc:1514] Processing DeleteTablet for tablet 15234403822b45e79875e8d0987f01ad with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:56356
I20250114 20:59:15.860519 25962 tablet_service.cc:1514] Processing DeleteTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:38918
I20250114 20:59:15.862540 26036 tablet_service.cc:1514] Processing DeleteTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:56356
I20250114 20:59:15.862980 26176 tablet_replica.cc:331] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:15.863332 26174 tablet_replica.cc:331] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:15.864634 26174 raft_consensus.cc:2238] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:15.866365 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
I20250114 20:59:15.866300 26174 raft_consensus.cc:2267] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.867194 26176 raft_consensus.cc:2238] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:15.868273 26176 raft_consensus.cc:2267] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
W20250114 20:59:15.868221 25779 catalog_manager.cc:4615] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): DeleteTablet:TABLET_DATA_DELETED RPC failed for tablet 1561b3c6c3dc4e4b92775843e64fba46: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111)
I20250114 20:59:15.871762 26174 ts_tablet_manager.cc:1905] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.873083 26176 ts_tablet_manager.cc:1905] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Deleting tablet data with delete state TABLET_DATA_DELETED
W20250114 20:59:15.880825 26177 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:15.881376 26178 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:15.891896 26180 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:15.892529 26176 ts_tablet_manager.cc:1918] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.892530 26174 ts_tablet_manager.cc:1918] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.892879 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:15.893203 26176 log.cc:1198] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/8579f5f1de554a51ac89b981fe372756
I20250114 20:59:15.893369 26174 log.cc:1198] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/8579f5f1de554a51ac89b981fe372756
I20250114 20:59:15.894222 26176 ts_tablet_manager.cc:1939] T 8579f5f1de554a51ac89b981fe372756 P 887b7f1840654ca4a8278f3aa3eba169: Deleting consensus metadata
I20250114 20:59:15.894500 26174 ts_tablet_manager.cc:1939] T 8579f5f1de554a51ac89b981fe372756 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting consensus metadata
I20250114 20:59:15.894750 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:15.895042 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:15.895273 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888355895253 us; error 0 us; skew 500 ppm
I20250114 20:59:15.896117 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:15.898319 25780 catalog_manager.cc:4872] TS 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835): tablet 8579f5f1de554a51ac89b981fe372756 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.898545 26176 tablet_replica.cc:331] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:15.899066 26174 tablet_replica.cc:331] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:15.899279 25781 catalog_manager.cc:4872] TS dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): tablet 8579f5f1de554a51ac89b981fe372756 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.899804 26176 raft_consensus.cc:2238] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:15.900065 26174 raft_consensus.cc:2238] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:15.900574 26176 raft_consensus.cc:2267] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.900822 26174 raft_consensus.cc:2267] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.901288 20370 webserver.cc:458] Webserver started at http://127.19.228.132:44053/ using document root <none> and password file <none>
I20250114 20:59:15.901852 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:15.902096 20370 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250114 20:59:15.902447 20370 server_base.cc:882] This appears to be a new deployment of Kudu; creating new FS layout
I20250114 20:59:15.904011 20370 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/instance:
uuid: "56a9a548d4964c5e8801aaa7fe1e70cc"
format_stamp: "Formatted at 2025-01-14 20:59:15 on dist-test-slave-kc3q"
I20250114 20:59:15.904076 26174 ts_tablet_manager.cc:1905] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.904227 26176 ts_tablet_manager.cc:1905] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.910401 20370 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.001s	sys 0.003s
I20250114 20:59:15.915109 26185 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:15.915916 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250114 20:59:15.916175 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root
uuid: "56a9a548d4964c5e8801aaa7fe1e70cc"
format_stamp: "Formatted at 2025-01-14 20:59:15 on dist-test-slave-kc3q"
I20250114 20:59:15.916438 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:15.919023 26174 ts_tablet_manager.cc:1918] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 0.0
I20250114 20:59:15.919023 26176 ts_tablet_manager.cc:1918] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 0.0
I20250114 20:59:15.919634 26174 log.cc:1198] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/1561b3c6c3dc4e4b92775843e64fba46
I20250114 20:59:15.919637 26176 log.cc:1198] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/1561b3c6c3dc4e4b92775843e64fba46
I20250114 20:59:15.920423 26176 ts_tablet_manager.cc:1939] T 1561b3c6c3dc4e4b92775843e64fba46 P 887b7f1840654ca4a8278f3aa3eba169: Deleting consensus metadata
I20250114 20:59:15.920426 26174 ts_tablet_manager.cc:1939] T 1561b3c6c3dc4e4b92775843e64fba46 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting consensus metadata
I20250114 20:59:15.923823 26174 tablet_replica.cc:331] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:15.923902 25780 catalog_manager.cc:4872] TS 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835): tablet 1561b3c6c3dc4e4b92775843e64fba46 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.924747 26174 raft_consensus.cc:2238] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:15.924875 26176 tablet_replica.cc:331] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:15.925396 26174 raft_consensus.cc:2267] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.925735 26176 raft_consensus.cc:2238] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:15.926482 25781 catalog_manager.cc:4872] TS dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): tablet 1561b3c6c3dc4e4b92775843e64fba46 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.926884 26176 raft_consensus.cc:2267] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.930353 26176 ts_tablet_manager.cc:1905] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.930363 26174 ts_tablet_manager.cc:1905] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.934373 26116 consensus_queue.cc:579] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Leader has been unable to successfully communicate with peer 83d3e5ddd1984a80a46be43f24be46b2 for more than 1 seconds (1.444s)
I20250114 20:59:15.942521 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:15.944259 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:15.945254 26176 ts_tablet_manager.cc:1918] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.945660 26176 log.cc:1198] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/15234403822b45e79875e8d0987f01ad
I20250114 20:59:15.945952 26174 ts_tablet_manager.cc:1918] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.946312 26174 log.cc:1198] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/15234403822b45e79875e8d0987f01ad
I20250114 20:59:15.946377 26176 ts_tablet_manager.cc:1939] T 15234403822b45e79875e8d0987f01ad P 887b7f1840654ca4a8278f3aa3eba169: Deleting consensus metadata
I20250114 20:59:15.947057 26174 ts_tablet_manager.cc:1939] T 15234403822b45e79875e8d0987f01ad P dd425b8989c641ecba6b4b51af3f9f5c: Deleting consensus metadata
I20250114 20:59:15.947096 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:15.949990 25780 catalog_manager.cc:4872] TS 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835): tablet 15234403822b45e79875e8d0987f01ad (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.950343 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250114 20:59:15.950647 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:15.950829 26176 tablet_replica.cc:331] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:15.950666 25781 catalog_manager.cc:4872] TS dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): tablet 15234403822b45e79875e8d0987f01ad (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.951032 20370 ts_tablet_manager.cc:610] Registered 0 tablets
I20250114 20:59:15.951040 26174 tablet_replica.cc:331] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:15.951522 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:15.952221 26176 raft_consensus.cc:2238] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:15.952237 26174 raft_consensus.cc:2238] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:15.953099 26174 raft_consensus.cc:2267] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.953580 26176 raft_consensus.cc:2267] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:15.956619 26174 ts_tablet_manager.cc:1905] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.957343 26176 ts_tablet_manager.cc:1905] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:15.970811 26174 ts_tablet_manager.cc:1918] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.971228 26174 log.cc:1198] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/fc2d96adc6d34e3ba9075ad391a1f37a
I20250114 20:59:15.971974 26174 ts_tablet_manager.cc:1939] T fc2d96adc6d34e3ba9075ad391a1f37a P dd425b8989c641ecba6b4b51af3f9f5c: Deleting consensus metadata
I20250114 20:59:15.975579 25781 catalog_manager.cc:4872] TS dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): tablet fc2d96adc6d34e3ba9075ad391a1f37a (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:15.977867 26176 ts_tablet_manager.cc:1918] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:15.978252 26176 log.cc:1198] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/fc2d96adc6d34e3ba9075ad391a1f37a
I20250114 20:59:15.978974 26176 ts_tablet_manager.cc:1939] T fc2d96adc6d34e3ba9075ad391a1f37a P 887b7f1840654ca4a8278f3aa3eba169: Deleting consensus metadata
I20250114 20:59:15.981612 25780 catalog_manager.cc:4872] TS 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835): tablet fc2d96adc6d34e3ba9075ad391a1f37a (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:16.004669 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.132:37439
I20250114 20:59:16.004765 26247 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.132:37439 every 8 connection(s)
I20250114 20:59:16.009577 20370 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250114 20:59:16.017004 26251 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250114 20:59:16.017994 26252 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:16.020079 26248 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33283
I20250114 20:59:16.020504 26248 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:16.021504 26248 heartbeater.cc:510] Master 127.19.228.190:33283 requested a full tablet report, sending...
W20250114 20:59:16.022153 26254 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250114 20:59:16.023234 20370 server_base.cc:1034] running on GCE node
I20250114 20:59:16.023809 25793 ts_manager.cc:194] Registered new tserver with Master: 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439)
I20250114 20:59:16.024174 20370 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250114 20:59:16.024499 20370 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250114 20:59:16.024694 20370 hybrid_clock.cc:648] HybridClock initialized: now 1736888356024676 us; error 0 us; skew 500 ppm
I20250114 20:59:16.025246 20370 server_base.cc:834] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250114 20:59:16.025306 25793 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:42820
I20250114 20:59:16.028306 20370 webserver.cc:458] Webserver started at http://127.19.228.129:35185/ using document root <none> and password file <none>
I20250114 20:59:16.028760 20370 fs_manager.cc:362] Metadata directory not provided
I20250114 20:59:16.028928 20370 fs_manager.cc:365] Using existing metadata directory in first data directory
I20250114 20:59:16.032444 20370 fs_manager.cc:714] Time spent opening directory manager: real 0.003s	user 0.004s	sys 0.000s
I20250114 20:59:16.035967 26259 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250114 20:59:16.036650 20370 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.002s	sys 0.001s
I20250114 20:59:16.036927 20370 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
uuid: "83d3e5ddd1984a80a46be43f24be46b2"
format_stamp: "Formatted at 2025-01-14 20:59:12 on dist-test-slave-kc3q"
I20250114 20:59:16.037199 20370 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
metadata directory: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root
1 data directories: /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250114 20:59:16.051155 25971 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.052618 20370 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250114 20:59:16.054258 20370 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250114 20:59:16.058868 26046 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:59:16.060734 26114 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
I20250114 20:59:16.064174 20370 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250114 20:59:16.078742 26114 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
W20250114 20:59:16.080940 25935 consensus_peers.cc:487] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.081809 26045 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.093623 26270 ts_tablet_manager.cc:542] Loading tablet metadata (0/8 complete)
I20250114 20:59:16.094249 25781 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 5f8e59274e0e4d6782564cd3da8080db with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 5)
W20250114 20:59:16.094522 25935 consensus_peers.cc:487] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 5f8e59274e0e4d6782564cd3da8080db. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.096644 25793 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:16.113399 26045 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.119181 25971 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:59:16.120502 26095 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
W20250114 20:59:16.129348 26009 consensus_peers.cc:487] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.130973 26045 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
W20250114 20:59:16.133742 26009 consensus_peers.cc:487] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 070a203b588b46e58804064fe5f00952. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.138917 25971 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250114 20:59:16.138629 26095 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.140434 25972 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
W20250114 20:59:16.141528 26009 consensus_peers.cc:487] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ee053a9ccc5448e492c8e9ad440cfeb1. This is attempt 1: this message will repeat every 5th retry.
W20250114 20:59:16.144371 26009 consensus_peers.cc:487] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.145658 26116 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:59:16.149111 25780 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 070a203b588b46e58804064fe5f00952 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 6)
I20250114 20:59:16.151772 20370 ts_tablet_manager.cc:579] Loaded tablet metadata (8 total tablets, 8 live tablets)
I20250114 20:59:16.152103 20370 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.067s	user 0.001s	sys 0.009s
I20250114 20:59:16.152413 20370 ts_tablet_manager.cc:594] Registering tablets (0/8 complete)
I20250114 20:59:16.153226 25792 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:16.159121 26270 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.165637 26116 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.167657 26045 consensus_queue.cc:237] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.172994 25972 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.185413 25792 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.187449 25780 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet ee053a9ccc5448e492c8e9ad440cfeb1 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 6)
I20250114 20:59:16.197927 20370 ts_tablet_manager.cc:610] Registered 8 tablets
I20250114 20:59:16.198243 20370 ts_tablet_manager.cc:589] Time spent register tablets: real 0.046s	user 0.044s	sys 0.000s
W20250114 20:59:16.208176 26009 consensus_peers.cc:487] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: be4b7e43ed354fe889c83c2c65a05af7. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.212604 25972 raft_consensus.cc:1270] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
W20250114 20:59:16.213416 26009 consensus_peers.cc:487] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:16.213966 26116 consensus_queue.cc:1035] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250114 20:59:16.215181 26270 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=1 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.216156 26270 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.216890 26270 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.058s	user 0.032s	sys 0.023s
I20250114 20:59:16.219743 26270 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.220511 26270 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.221341 26270 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.230304 26270 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.013s	user 0.010s	sys 0.003s
I20250114 20:59:16.231402 26116 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.234215 26270 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.235879 25972 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.242528 25780 catalog_manager.cc:5039] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet be4b7e43ed354fe889c83c2c65a05af7 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 6)
I20250114 20:59:16.252615 26270 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.252377 25793 catalog_manager.cc:5526] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:16.253531 26270 tablet_bootstrap.cc:492] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.254338 26270 ts_tablet_manager.cc:1397] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.020s	user 0.011s	sys 0.008s
I20250114 20:59:16.256716 26270 raft_consensus.cc:357] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.257288 26270 raft_consensus.cc:738] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.257853 26270 consensus_queue.cc:260] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.260037 26270 ts_tablet_manager.cc:1428] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.005s	user 0.005s	sys 0.000s
I20250114 20:59:16.260780 26270 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.278018 26270 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.278692 26270 tablet_bootstrap.cc:492] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.279273 26270 ts_tablet_manager.cc:1397] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.019s	user 0.012s	sys 0.003s
I20250114 20:59:16.281176 26270 raft_consensus.cc:357] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.281714 26270 raft_consensus.cc:738] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.282289 26270 consensus_queue.cc:260] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.283998 26270 ts_tablet_manager.cc:1428] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.007s	sys 0.000s
I20250114 20:59:16.284584 26270 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.298596 26270 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 1 replicates
I20250114 20:59:16.299296 26270 tablet_bootstrap.cc:492] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.299787 26270 ts_tablet_manager.cc:1397] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.015s	user 0.011s	sys 0.004s
I20250114 20:59:16.301383 26270 raft_consensus.cc:357] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 1 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.301980 26270 raft_consensus.cc:738] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.302542 26270 consensus_queue.cc:260] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.304229 26270 ts_tablet_manager.cc:1428] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:16.304823 26270 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
W20250114 20:59:16.306555 25843 auto_rebalancer.cc:227] Could not retrieve cluster info: Not found: tserver 83d3e5ddd1984a80a46be43f24be46b2 not available for placement
I20250114 20:59:16.309834 20370 rpc_server.cc:307] RPC server started. Bound to: 127.19.228.129:44683
I20250114 20:59:16.310012 26338 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.19.228.129:44683 every 8 connection(s)
I20250114 20:59:16.316931 26270 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=1 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.317798 26270 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.318301 26270 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.014s	user 0.012s	sys 0.000s
I20250114 20:59:16.319968 26270 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.320443 26270 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.320917 26270 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.322445 26270 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.002s	sys 0.000s
I20250114 20:59:16.322983 26270 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.324640 26339 heartbeater.cc:346] Connected to a master server at 127.19.228.190:33283
I20250114 20:59:16.324954 26339 heartbeater.cc:463] Registering TS with master...
I20250114 20:59:16.325609 26339 heartbeater.cc:510] Master 127.19.228.190:33283 requested a full tablet report, sending...
I20250114 20:59:16.329816 25792 ts_manager.cc:194] Re-registered known tserver with Master: 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683)
I20250114 20:59:16.333385 26270 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=1 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.334017 26270 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.334443 26270 ts_tablet_manager.cc:1397] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.012s	user 0.008s	sys 0.000s
I20250114 20:59:16.336091 26270 raft_consensus.cc:357] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.336544 26270 raft_consensus.cc:738] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.336864 25792 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.0.1:42824
I20250114 20:59:16.337038 26270 consensus_queue.cc:260] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.338759 26270 ts_tablet_manager.cc:1428] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:16.339336 26270 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.339498 26339 heartbeater.cc:502] Master 127.19.228.190:33283 was elected leader, sending a full tablet report...
I20250114 20:59:16.349632 26270 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=1 overwritten=0 applied=1 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.350381 26270 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.350863 26270 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.012s	user 0.008s	sys 0.000s
I20250114 20:59:16.352449 26270 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:16.352895 26270 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.353380 26270 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:16.354753 26270 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:16.355316 26270 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap starting.
I20250114 20:59:16.364176 26270 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap replayed 1/1 log segments. Stats: ops{read=0 overwritten=0 applied=0 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.364696 26270 tablet_bootstrap.cc:492] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Bootstrap complete.
I20250114 20:59:16.365128 26270 ts_tablet_manager.cc:1397] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Time spent bootstrapping tablet: real 0.010s	user 0.008s	sys 0.000s
I20250114 20:59:16.366648 26270 raft_consensus.cc:357] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.367091 26270 raft_consensus.cc:738] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 83d3e5ddd1984a80a46be43f24be46b2, State: Initialized, Role: FOLLOWER
I20250114 20:59:16.367528 26270 consensus_queue.cc:260] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:16.368916 26270 ts_tablet_manager.cc:1428] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:16.491851 26303 tablet_service.cc:1514] Processing DeleteTablet for tablet 1561b3c6c3dc4e4b92775843e64fba46 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:55304
I20250114 20:59:16.493296 26347 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:16.493841 26303 tablet_service.cc:1514] Processing DeleteTablet for tablet 8579f5f1de554a51ac89b981fe372756 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:55304
I20250114 20:59:16.494031 26347 raft_consensus.cc:2238] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:16.494468 26347 pending_rounds.cc:62] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Trying to abort 1 pending ops.
I20250114 20:59:16.494663 26347 pending_rounds.cc:69] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Aborting op as it isn't in flight: id { term: 1 index: 1 } timestamp: 7114294699622473728 op_type: NO_OP noop_request { }
I20250114 20:59:16.495007 26347 raft_consensus.cc:2883] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: NO_OP replication failed: Aborted: Op aborted
I20250114 20:59:16.495290 26347 raft_consensus.cc:2267] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:16.497296 26347 ts_tablet_manager.cc:1905] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:16.506805 26303 tablet_service.cc:1514] Processing DeleteTablet for tablet fc2d96adc6d34e3ba9075ad391a1f37a with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:55304
I20250114 20:59:16.507249 26347 ts_tablet_manager.cc:1918] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 1.1
I20250114 20:59:16.507588 26347 log.cc:1198] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/1561b3c6c3dc4e4b92775843e64fba46
I20250114 20:59:16.508113 26347 ts_tablet_manager.cc:1939] T 1561b3c6c3dc4e4b92775843e64fba46 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting consensus metadata
I20250114 20:59:16.510311 26347 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:16.510267 25779 catalog_manager.cc:4872] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): tablet 1561b3c6c3dc4e4b92775843e64fba46 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:16.510974 26347 raft_consensus.cc:2238] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:16.511369 26347 raft_consensus.cc:2267] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:16.513255 26347 ts_tablet_manager.cc:1905] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:16.522529 26347 ts_tablet_manager.cc:1918] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 0.0
I20250114 20:59:16.522792 26347 log.cc:1198] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/8579f5f1de554a51ac89b981fe372756
I20250114 20:59:16.523284 26347 ts_tablet_manager.cc:1939] T 8579f5f1de554a51ac89b981fe372756 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting consensus metadata
I20250114 20:59:16.525158 26347 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:16.525291 25779 catalog_manager.cc:4872] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): tablet 8579f5f1de554a51ac89b981fe372756 (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:16.525750 26347 raft_consensus.cc:2238] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:16.526155 26347 raft_consensus.cc:2267] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:16.528192 26347 ts_tablet_manager.cc:1905] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:16.530222 26348 ts_tablet_manager.cc:927] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:16.531697 26348 tablet_copy_client.cc:323] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:32835
I20250114 20:59:16.535471 26303 tablet_service.cc:1514] Processing DeleteTablet for tablet 15234403822b45e79875e8d0987f01ad with delete_type TABLET_DATA_DELETED (Table deleted at 2025-01-14 20:59:15 UTC) from {username='slave'} at 127.0.0.1:55304
I20250114 20:59:16.539777 26347 ts_tablet_manager.cc:1918] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 0.0
I20250114 20:59:16.540095 26347 log.cc:1198] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/fc2d96adc6d34e3ba9075ad391a1f37a
I20250114 20:59:16.540608 26347 ts_tablet_manager.cc:1939] T fc2d96adc6d34e3ba9075ad391a1f37a P 83d3e5ddd1984a80a46be43f24be46b2: Deleting consensus metadata
I20250114 20:59:16.543285 26347 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:16.543237 26056 tablet_copy_service.cc:140] P 887b7f1840654ca4a8278f3aa3eba169: Received BeginTabletCopySession request for tablet 070a203b588b46e58804064fe5f00952 from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:56394)
I20250114 20:59:16.543421 25779 catalog_manager.cc:4872] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): tablet fc2d96adc6d34e3ba9075ad391a1f37a (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:16.544080 26056 tablet_copy_service.cc:161] P 887b7f1840654ca4a8278f3aa3eba169: Beginning new tablet copy session on tablet 070a203b588b46e58804064fe5f00952 from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:56394: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952
I20250114 20:59:16.544457 26347 raft_consensus.cc:2238] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:16.545049 26347 raft_consensus.cc:2267] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:16.548533 26352 ts_tablet_manager.cc:927] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:16.548825 26347 ts_tablet_manager.cc:1905] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250114 20:59:16.550208 26352 tablet_copy_client.cc:323] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:45905
I20250114 20:59:16.551527 26056 tablet_copy_source_session.cc:215] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:16.554419 26348 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 070a203b588b46e58804064fe5f00952. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:16.560683 26347 ts_tablet_manager.cc:1918] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 0.0
I20250114 20:59:16.561013 26347 log.cc:1198] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/15234403822b45e79875e8d0987f01ad
I20250114 20:59:16.561589 26347 ts_tablet_manager.cc:1939] T 15234403822b45e79875e8d0987f01ad P 83d3e5ddd1984a80a46be43f24be46b2: Deleting consensus metadata
I20250114 20:59:16.564268 25779 catalog_manager.cc:4872] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): tablet 15234403822b45e79875e8d0987f01ad (table dugtrio [id=e14b860cd50442dd8e49f707f706ad01]) successfully deleted
I20250114 20:59:16.564692 25982 tablet_copy_service.cc:140] P dd425b8989c641ecba6b4b51af3f9f5c: Received BeginTabletCopySession request for tablet 5f8e59274e0e4d6782564cd3da8080db from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:38948)
I20250114 20:59:16.565161 25982 tablet_copy_service.cc:161] P dd425b8989c641ecba6b4b51af3f9f5c: Beginning new tablet copy session on tablet 5f8e59274e0e4d6782564cd3da8080db from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:38948: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db
I20250114 20:59:16.566094 26348 tablet_copy_client.cc:806] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:16.566625 26348 tablet_copy_client.cc:670] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:16.569849 26348 tablet_copy_client.cc:538] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:16.570597 25982 tablet_copy_source_session.cc:215] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:16.573158 26352 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5f8e59274e0e4d6782564cd3da8080db. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:16.576861 26348 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:16.585958 26352 tablet_copy_client.cc:806] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:16.586468 26352 tablet_copy_client.cc:670] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:16.589907 26352 tablet_copy_client.cc:538] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:16.595319 26348 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.596215 26348 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:16.596873 26348 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.020s	user 0.008s	sys 0.011s
I20250114 20:59:16.597342 26352 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:16.596539 26313 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.599673 26348 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.600502 26348 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:16.601118 26348 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.607219 26248 heartbeater.cc:502] Master 127.19.228.190:33283 was elected leader, sending a full tablet report...
I20250114 20:59:16.607760 26348 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.011s	user 0.011s	sys 0.001s
I20250114 20:59:16.612509 26056 tablet_copy_service.cc:342] P 887b7f1840654ca4a8278f3aa3eba169: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952 received from {username='slave'} at 127.0.0.1:56394
I20250114 20:59:16.612984 26056 tablet_copy_service.cc:434] P 887b7f1840654ca4a8278f3aa3eba169: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952 on tablet 070a203b588b46e58804064fe5f00952 with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:16.621330 26352 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.622016 26352 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:16.622555 26352 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.025s	user 0.014s	sys 0.004s
I20250114 20:59:16.624504 26352 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.625072 26352 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:16.625505 26352 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.627471 26352 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:16.628966 25982 tablet_copy_service.cc:342] P dd425b8989c641ecba6b4b51af3f9f5c: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db received from {username='slave'} at 127.0.0.1:38948
I20250114 20:59:16.629391 25982 tablet_copy_service.cc:434] P dd425b8989c641ecba6b4b51af3f9f5c: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db on tablet 5f8e59274e0e4d6782564cd3da8080db with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:16.692016 26313 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.703068 26313 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.717145 26352 ts_tablet_manager.cc:927] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:16.718611 26352 tablet_copy_client.cc:323] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:32835
I20250114 20:59:16.719811 26056 tablet_copy_service.cc:140] P 887b7f1840654ca4a8278f3aa3eba169: Received BeginTabletCopySession request for tablet be4b7e43ed354fe889c83c2c65a05af7 from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:56394)
I20250114 20:59:16.720176 26056 tablet_copy_service.cc:161] P 887b7f1840654ca4a8278f3aa3eba169: Beginning new tablet copy session on tablet be4b7e43ed354fe889c83c2c65a05af7 from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:56394: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-be4b7e43ed354fe889c83c2c65a05af7
I20250114 20:59:16.724808 26056 tablet_copy_source_session.cc:215] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:16.726879 26352 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet be4b7e43ed354fe889c83c2c65a05af7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:16.736646 26352 tablet_copy_client.cc:806] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:16.737152 26352 tablet_copy_client.cc:670] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:16.737428 26313 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.2: config changed from index -1 to 2, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:16.740540 26352 tablet_copy_client.cc:538] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:16.746234 26352 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:16.751690 26348 ts_tablet_manager.cc:927] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:16.753170 26348 tablet_copy_client.cc:323] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:32835
I20250114 20:59:16.754396 26056 tablet_copy_service.cc:140] P 887b7f1840654ca4a8278f3aa3eba169: Received BeginTabletCopySession request for tablet ee053a9ccc5448e492c8e9ad440cfeb1 from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:56394)
I20250114 20:59:16.754761 26056 tablet_copy_service.cc:161] P 887b7f1840654ca4a8278f3aa3eba169: Beginning new tablet copy session on tablet ee053a9ccc5448e492c8e9ad440cfeb1 from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:56394: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1
I20250114 20:59:16.759706 26056 tablet_copy_source_session.cc:215] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:16.761482 26352 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.762295 26352 tablet_bootstrap.cc:492] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:16.762310 26348 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee053a9ccc5448e492c8e9ad440cfeb1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:16.763018 26352 ts_tablet_manager.cc:1397] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.017s	user 0.015s	sys 0.003s
I20250114 20:59:16.765693 26352 raft_consensus.cc:357] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.766480 26352 raft_consensus.cc:738] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:16.767091 26352 consensus_queue.cc:260] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.769554 26352 ts_tablet_manager.cc:1428] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.006s	user 0.001s	sys 0.004s
I20250114 20:59:16.771090 26056 tablet_copy_service.cc:342] P 887b7f1840654ca4a8278f3aa3eba169: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-be4b7e43ed354fe889c83c2c65a05af7 received from {username='slave'} at 127.0.0.1:56394
I20250114 20:59:16.771468 26056 tablet_copy_service.cc:434] P 887b7f1840654ca4a8278f3aa3eba169: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-be4b7e43ed354fe889c83c2c65a05af7 on tablet be4b7e43ed354fe889c83c2c65a05af7 with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:16.775812 26348 tablet_copy_client.cc:806] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:16.776612 26348 tablet_copy_client.cc:670] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:16.779511 26348 tablet_copy_client.cc:538] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:16.784873 26348 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:16.798941 26348 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:16.799531 26348 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:16.800027 26348 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.015s	user 0.012s	sys 0.004s
I20250114 20:59:16.802335 26348 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.802958 26348 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:16.803417 26348 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 2 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:16.805332 26348 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:16.806720 26056 tablet_copy_service.cc:342] P 887b7f1840654ca4a8278f3aa3eba169: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1 received from {username='slave'} at 127.0.0.1:56394
I20250114 20:59:16.807113 26056 tablet_copy_service.cc:434] P 887b7f1840654ca4a8278f3aa3eba169: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1 on tablet ee053a9ccc5448e492c8e9ad440cfeb1 with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:16.989464 25972 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } }
I20250114 20:59:16.993352 26313 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:16.993525 26045 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:16.994397 26358 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:16.995033 26367 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.000034 26097 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } }
I20250114 20:59:17.001425 26045 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } }
I20250114 20:59:17.004235 26313 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } }
I20250114 20:59:17.007050 25781 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 5f8e59274e0e4d6782564cd3da8080db with cas_config_opid_index 2: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:17.011794 25792 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:17.036201 26213 tablet_service.cc:1514] Processing DeleteTablet for tablet 5f8e59274e0e4d6782564cd3da8080db with delete_type TABLET_DATA_TOMBSTONED (TS 56a9a548d4964c5e8801aaa7fe1e70cc not found in new config with opid_index 3) from {username='slave'} at 127.0.0.1:36612
I20250114 20:59:17.037680 26373 tablet_replica.cc:331] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:17.038285 26373 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus shutting down.
I20250114 20:59:17.038686 26373 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus is shut down!
I20250114 20:59:17.040572 26373 ts_tablet_manager.cc:1905] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:17.050294 26373 ts_tablet_manager.cc:1918] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.2
I20250114 20:59:17.050560 26373 log.cc:1198] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/wals/5f8e59274e0e4d6782564cd3da8080db
I20250114 20:59:17.051616 25779 catalog_manager.cc:4872] TS 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): tablet 5f8e59274e0e4d6782564cd3da8080db (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:17.076130 26045 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:17.079967 25972 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.080052 26313 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.081130 26116 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.081714 26374 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.086663 26095 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.087900 25972 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.088137 26313 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.097247 25780 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet ee053a9ccc5448e492c8e9ad440cfeb1 with cas_config_opid_index 2: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:17.099143 25794 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.106971 26213 tablet_service.cc:1514] Processing DeleteTablet for tablet ee053a9ccc5448e492c8e9ad440cfeb1 with delete_type TABLET_DATA_TOMBSTONED (TS 56a9a548d4964c5e8801aaa7fe1e70cc not found in new config with opid_index 3) from {username='slave'} at 127.0.0.1:36612
I20250114 20:59:17.107775 26373 tablet_replica.cc:331] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:17.108377 26373 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus shutting down.
I20250114 20:59:17.108744 26373 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus is shut down!
I20250114 20:59:17.110991 26373 ts_tablet_manager.cc:1905] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:17.121873 26373 ts_tablet_manager.cc:1918] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.2
I20250114 20:59:17.122167 26373 log.cc:1198] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/wals/ee053a9ccc5448e492c8e9ad440cfeb1
I20250114 20:59:17.123253 25779 catalog_manager.cc:4872] TS 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): tablet ee053a9ccc5448e492c8e9ad440cfeb1 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:17.151434 26223 raft_consensus.cc:1212] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:59:17.176049 26223 raft_consensus.cc:1212] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Deduplicated request from leader. Original: 1.1->[1.2-1.2]   Dedup: 1.2->[]
I20250114 20:59:17.300208 26045 consensus_queue.cc:237] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:17.305011 25972 raft_consensus.cc:1270] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.305112 26313 raft_consensus.cc:1270] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.306380 26374 consensus_queue.cc:1035] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.306919 26116 consensus_queue.cc:1035] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.313686 26374 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.314570 26313 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.321408 25972 raft_consensus.cc:2949] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.325363 26045 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } }
I20250114 20:59:17.329881 25792 catalog_manager.cc:5526] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 reported cstate change: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.341725 25780 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet be4b7e43ed354fe889c83c2c65a05af7 with cas_config_opid_index 2: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:17.343472 26213 tablet_service.cc:1514] Processing DeleteTablet for tablet be4b7e43ed354fe889c83c2c65a05af7 with delete_type TABLET_DATA_TOMBSTONED (TS 56a9a548d4964c5e8801aaa7fe1e70cc not found in new config with opid_index 3) from {username='slave'} at 127.0.0.1:36612
I20250114 20:59:17.345078 26373 tablet_replica.cc:331] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:17.345885 26373 raft_consensus.cc:2238] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus shutting down.
I20250114 20:59:17.346397 26373 raft_consensus.cc:2267] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus is shut down!
I20250114 20:59:17.348868 26373 ts_tablet_manager.cc:1905] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:17.356945 25972 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.358085 26313 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 2. Preceding OpId from leader: term: 1 index: 3. (index mismatch)
I20250114 20:59:17.358542 26095 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.359704 26374 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 3, Last known committed idx: 2, Time since last communication: 0.000s
I20250114 20:59:17.365983 26095 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.367792 26313 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.369606 26373 ts_tablet_manager.cc:1918] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.2
I20250114 20:59:17.370045 26373 log.cc:1198] T be4b7e43ed354fe889c83c2c65a05af7 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/wals/be4b7e43ed354fe889c83c2c65a05af7
I20250114 20:59:17.368889 25972 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.3: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } }
I20250114 20:59:17.371469 25779 catalog_manager.cc:4872] TS 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): tablet be4b7e43ed354fe889c83c2c65a05af7 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:17.372566 25971 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.375895 25780 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 070a203b588b46e58804064fe5f00952 with cas_config_opid_index 2: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
W20250114 20:59:17.378952 25935 consensus_peers.cc:487] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:17.379575 26313 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:59:17.381453 26045 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:59:17.381261 26367 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.381076 25793 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: config changed from index 2 to 3, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) evicted. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } }
I20250114 20:59:17.384300 26367 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.389036 26358 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.390565 26313 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.392853 26045 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.400521 26213 tablet_service.cc:1514] Processing DeleteTablet for tablet 070a203b588b46e58804064fe5f00952 with delete_type TABLET_DATA_TOMBSTONED (TS 56a9a548d4964c5e8801aaa7fe1e70cc not found in new config with opid_index 3) from {username='slave'} at 127.0.0.1:36612
I20250114 20:59:17.401408 26373 tablet_replica.cc:331] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:17.402112 26373 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus shutting down.
I20250114 20:59:17.402552 26373 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Raft consensus is shut down!
I20250114 20:59:17.403648 25793 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:17.404909 26373 ts_tablet_manager.cc:1905] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:17.420107 26373 ts_tablet_manager.cc:1918] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.2
I20250114 20:59:17.420483 26373 log.cc:1198] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-3-root/wals/070a203b588b46e58804064fe5f00952
I20250114 20:59:17.421689 25779 catalog_manager.cc:4872] TS 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): tablet 070a203b588b46e58804064fe5f00952 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:17.423786 26045 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
W20250114 20:59:17.428691 26009 consensus_peers.cc:487] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:17.428936 25972 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:59:17.429342 26313 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:59:17.430084 26376 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.430796 26095 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.435971 26374 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.437455 25972 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.437674 26313 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.449293 25793 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:17.450352 26045 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 3, Committed index: 3, Last appended: 1.3, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.455358 25972 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
W20250114 20:59:17.456017 26009 consensus_peers.cc:487] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132:37439): Couldn't send request to peer 56a9a548d4964c5e8801aaa7fe1e70cc. Error code: TABLET_NOT_FOUND (6). Status: Illegal state: Tablet not RUNNING: STOPPED. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:17.456610 26095 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.456926 26313 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 3. Preceding OpId from leader: term: 1 index: 4. (index mismatch)
I20250114 20:59:17.459054 26095 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 3, Time since last communication: 0.000s
I20250114 20:59:17.463843 26374 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.465447 25972 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.469184 26313 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.4: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New config: { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } } }
I20250114 20:59:17.477275 25793 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 reported cstate change: config changed from index 3 to 4, NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) added. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250114 20:59:17.817376 26383 ts_tablet_manager.cc:927] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905)
I20250114 20:59:17.818811 26383 tablet_copy_client.cc:323] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.130:45905
I20250114 20:59:17.820205 25982 tablet_copy_service.cc:140] P dd425b8989c641ecba6b4b51af3f9f5c: Received BeginTabletCopySession request for tablet 5f8e59274e0e4d6782564cd3da8080db from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:38948)
I20250114 20:59:17.820695 25982 tablet_copy_service.cc:161] P dd425b8989c641ecba6b4b51af3f9f5c: Beginning new tablet copy session on tablet 5f8e59274e0e4d6782564cd3da8080db from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:38948: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db
I20250114 20:59:17.827286 25982 tablet_copy_source_session.cc:215] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:17.829447 26383 ts_tablet_manager.cc:1905] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_COPYING
I20250114 20:59:17.840334 26383 ts_tablet_manager.cc:1918] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.2
I20250114 20:59:17.841025 26383 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5f8e59274e0e4d6782564cd3da8080db. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:17.846822 26383 tablet_copy_client.cc:806] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:17.847246 26383 tablet_copy_client.cc:670] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:17.850323 26383 tablet_copy_client.cc:538] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:17.859841 26383 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:17.878574 26383 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:17.879240 26383 tablet_bootstrap.cc:492] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:17.879757 26383 ts_tablet_manager.cc:1397] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.020s	user 0.018s	sys 0.002s
I20250114 20:59:17.881547 26383 raft_consensus.cc:357] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.882140 26383 raft_consensus.cc:738] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:17.882624 26383 consensus_queue.cc:260] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.885186 26383 ts_tablet_manager.cc:1428] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.005s	user 0.000s	sys 0.004s
I20250114 20:59:17.886996 25982 tablet_copy_service.cc:342] P dd425b8989c641ecba6b4b51af3f9f5c: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db received from {username='slave'} at 127.0.0.1:38948
I20250114 20:59:17.887308 25982 tablet_copy_service.cc:434] P dd425b8989c641ecba6b4b51af3f9f5c: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-5f8e59274e0e4d6782564cd3da8080db on tablet 5f8e59274e0e4d6782564cd3da8080db with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:17.925810 26383 ts_tablet_manager.cc:927] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:17.927421 26383 tablet_copy_client.cc:323] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:32835
I20250114 20:59:17.928843 26056 tablet_copy_service.cc:140] P 887b7f1840654ca4a8278f3aa3eba169: Received BeginTabletCopySession request for tablet 070a203b588b46e58804064fe5f00952 from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:56394)
I20250114 20:59:17.929314 26056 tablet_copy_service.cc:161] P 887b7f1840654ca4a8278f3aa3eba169: Beginning new tablet copy session on tablet 070a203b588b46e58804064fe5f00952 from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:56394: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952
I20250114 20:59:17.935070 26056 tablet_copy_source_session.cc:215] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:17.937350 26383 ts_tablet_manager.cc:1905] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_COPYING
I20250114 20:59:17.948925 26383 ts_tablet_manager.cc:1918] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.2
I20250114 20:59:17.949436 26383 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 070a203b588b46e58804064fe5f00952. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:17.954489 26383 tablet_copy_client.cc:806] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:17.954957 26383 tablet_copy_client.cc:670] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:17.957823 26383 tablet_copy_client.cc:538] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:17.963901 26383 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:17.983770 26383 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:17.984467 26383 tablet_bootstrap.cc:492] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:17.985090 26383 ts_tablet_manager.cc:1397] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.021s	user 0.019s	sys 0.001s
I20250114 20:59:17.987298 26383 raft_consensus.cc:357] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.987915 26383 raft_consensus.cc:738] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:17.988390 26383 consensus_queue.cc:260] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:17.990391 26383 ts_tablet_manager.cc:1428] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.005s	user 0.004s	sys 0.000s
I20250114 20:59:17.991760 26056 tablet_copy_service.cc:342] P 887b7f1840654ca4a8278f3aa3eba169: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952 received from {username='slave'} at 127.0.0.1:56394
I20250114 20:59:17.992163 26056 tablet_copy_service.cc:434] P 887b7f1840654ca4a8278f3aa3eba169: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-070a203b588b46e58804064fe5f00952 on tablet 070a203b588b46e58804064fe5f00952 with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:18.026023 26383 ts_tablet_manager.cc:927] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Initiating tablet copy from peer 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835)
I20250114 20:59:18.027336 26383 tablet_copy_client.cc:323] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Beginning tablet copy session from remote peer at address 127.19.228.131:32835
I20250114 20:59:18.028622 26056 tablet_copy_service.cc:140] P 887b7f1840654ca4a8278f3aa3eba169: Received BeginTabletCopySession request for tablet ee053a9ccc5448e492c8e9ad440cfeb1 from peer 56a9a548d4964c5e8801aaa7fe1e70cc ({username='slave'} at 127.0.0.1:56394)
I20250114 20:59:18.029131 26056 tablet_copy_service.cc:161] P 887b7f1840654ca4a8278f3aa3eba169: Beginning new tablet copy session on tablet ee053a9ccc5448e492c8e9ad440cfeb1 from peer 56a9a548d4964c5e8801aaa7fe1e70cc at {username='slave'} at 127.0.0.1:56394: session id = 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1
I20250114 20:59:18.034535 26056 tablet_copy_source_session.cc:215] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: Tablet Copy: opened 0 blocks and 1 log segments
I20250114 20:59:18.036656 26383 ts_tablet_manager.cc:1905] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Deleting tablet data with delete state TABLET_DATA_COPYING
I20250114 20:59:18.047187 26383 ts_tablet_manager.cc:1918] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet deleted with delete type TABLET_DATA_COPYING: last-logged OpId 1.2
I20250114 20:59:18.047722 26383 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ee053a9ccc5448e492c8e9ad440cfeb1. 1 dirs total, 0 dirs full, 0 dirs failed
I20250114 20:59:18.052938 26383 tablet_copy_client.cc:806] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 0 data blocks...
I20250114 20:59:18.053373 26383 tablet_copy_client.cc:670] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Starting download of 1 WAL segments...
I20250114 20:59:18.056174 26383 tablet_copy_client.cc:538] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250114 20:59:18.062427 26383 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap starting.
I20250114 20:59:18.078999 26383 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap replayed 1/1 log segments. Stats: ops{read=4 overwritten=0 applied=4 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250114 20:59:18.079604 26383 tablet_bootstrap.cc:492] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Bootstrap complete.
I20250114 20:59:18.080065 26383 ts_tablet_manager.cc:1397] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent bootstrapping tablet: real 0.018s	user 0.020s	sys 0.000s
I20250114 20:59:18.081506 26383 raft_consensus.cc:357] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:18.082137 26383 raft_consensus.cc:738] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Becoming Follower/Learner. State: Replica: 56a9a548d4964c5e8801aaa7fe1e70cc, State: Initialized, Role: LEARNER
I20250114 20:59:18.082541 26383 consensus_queue.cc:260] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 4, Last appended: 1.4, Last appended by leader: 4, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 4 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: NON_VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: true } }
I20250114 20:59:18.084288 26383 ts_tablet_manager.cc:1428] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: Time spent starting tablet: real 0.004s	user 0.004s	sys 0.000s
I20250114 20:59:18.085754 26056 tablet_copy_service.cc:342] P 887b7f1840654ca4a8278f3aa3eba169: Request end of tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1 received from {username='slave'} at 127.0.0.1:56394
I20250114 20:59:18.086150 26056 tablet_copy_service.cc:434] P 887b7f1840654ca4a8278f3aa3eba169: ending tablet copy session 56a9a548d4964c5e8801aaa7fe1e70cc-ee053a9ccc5448e492c8e9ad440cfeb1 on tablet ee053a9ccc5448e492c8e9ad440cfeb1 with peer 56a9a548d4964c5e8801aaa7fe1e70cc
I20250114 20:59:18.391800 26223 raft_consensus.cc:1212] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Deduplicated request from leader. Original: 1.3->[1.4-1.4]   Dedup: 1.4->[]
I20250114 20:59:18.408623 26223 raft_consensus.cc:1212] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Deduplicated request from leader. Original: 1.3->[1.4-1.4]   Dedup: 1.4->[]
I20250114 20:59:18.517871 26223 raft_consensus.cc:1212] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Deduplicated request from leader. Original: 1.3->[1.4-1.4]   Dedup: 1.4->[]
I20250114 20:59:18.898346 26392 raft_consensus.cc:1059] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: attempting to promote NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc to VOTER
I20250114 20:59:18.900506 26392 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:18.906198 25972 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.906368 26223 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.907482 26312 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.908490 26392 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.909291 26376 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.909945 26374 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.924552 26367 raft_consensus.cc:1059] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: attempting to promote NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc to VOTER
I20250114 20:59:18.925851 26223 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.926796 26367 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:18.927724 26398 raft_consensus.cc:1059] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: attempting to promote NON_VOTER 56a9a548d4964c5e8801aaa7fe1e70cc to VOTER
I20250114 20:59:18.928937 26312 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.931068 26398 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 1.4, Last appended by leader: 0, Current term: 1, Majority size: 3, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:18.922482 26392 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.939436 25792 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc reported cstate change: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.943511 26222 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.944459 26313 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.945340 26376 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.945669 25972 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.947912 26376 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.949664 26376 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.972988 26223 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 LEARNER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.973462 26313 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.974344 26396 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.975580 26374 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.977008 26396 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.980829 26045 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 4. Preceding OpId from leader: term: 1 index: 5. (index mismatch)
I20250114 20:59:18.980683 26311 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:18.982784 26367 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250114 20:59:18.995608 26222 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.002533 25972 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } attrs { replace: true } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.015118 25971 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.017954 26367 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.021991 26221 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.030637 26045 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.031750 26313 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.5: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.049947 26045 consensus_queue.cc:237] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 1.5, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:19.055044 25794 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 reported cstate change: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } attrs { replace: true } } peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.056550 25794 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 reported cstate change: config changed from index 4 to 5, 56a9a548d4964c5e8801aaa7fe1e70cc (127.19.228.132) changed from NON_VOTER to VOTER. New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } attrs { replace: true } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.057729 26313 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.059985 26374 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250114 20:59:19.056541 26221 raft_consensus.cc:1270] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.064445 26374 consensus_queue.cc:1035] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250114 20:59:19.067695 26374 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.069097 26313 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.074249 26221 raft_consensus.cc:2949] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.078325 25780 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 070a203b588b46e58804064fe5f00952 with cas_config_opid_index 5: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:19.090927 25793 catalog_manager.cc:5526] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc reported cstate change: config changed from index 5 to 6, VOTER dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130) evicted. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.091410 26045 consensus_queue.cc:237] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 1.5, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:19.093914 25970 consensus_queue.cc:237] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 1.5, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } }
I20250114 20:59:19.098311 25972 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.100282 26374 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250114 20:59:19.100602 26222 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.102885 25959 tablet_service.cc:1514] Processing DeleteTablet for tablet 070a203b588b46e58804064fe5f00952 with delete_type TABLET_DATA_TOMBSTONED (TS dd425b8989c641ecba6b4b51af3f9f5c not found in new config with opid_index 6) from {username='slave'} at 127.0.0.1:38918
I20250114 20:59:19.105041 26367 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250114 20:59:19.100890 26313 raft_consensus.cc:1270] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Refusing update from remote peer dd425b8989c641ecba6b4b51af3f9f5c: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.105388 26416 tablet_replica.cc:331] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:19.106725 26416 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:19.106560 26396 consensus_queue.cc:1035] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [LEADER]: Connected to new peer: Peer: permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250114 20:59:19.107589 26416 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:19.100716 26221 raft_consensus.cc:1270] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Refusing update from remote peer 887b7f1840654ca4a8278f3aa3eba169: Log matching property violated. Preceding OpId in replica: term: 1 index: 5. Preceding OpId from leader: term: 1 index: 6. (index mismatch)
I20250114 20:59:19.110755 26398 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.113483 26402 consensus_queue.cc:1035] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [LEADER]: Connected to new peer: Peer: permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.001s
I20250114 20:59:19.112383 25970 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.115710 26396 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.122478 26313 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.122778 26221 raft_consensus.cc:2949] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.128129 25794 catalog_manager.cc:5526] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c reported cstate change: config changed from index 5 to 6, VOTER 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129) evicted. New cstate: current_term: 1 leader_uuid: "887b7f1840654ca4a8278f3aa3eba169" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.130213 25780 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet ee053a9ccc5448e492c8e9ad440cfeb1 with cas_config_opid_index 5: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:19.130925 25781 catalog_manager.cc:5039] ChangeConfig:REMOVE_PEER RPC for tablet 5f8e59274e0e4d6782564cd3da8080db with cas_config_opid_index 5: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250114 20:59:19.117719 26416 ts_tablet_manager.cc:1905] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:19.119298 26222 raft_consensus.cc:2949] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Committing config change with OpId 1.6: config changed from index 5 to 6, VOTER 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129) evicted. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "887b7f1840654ca4a8278f3aa3eba169" member_type: VOTER last_known_addr { host: "127.19.228.131" port: 32835 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.137315 25793 catalog_manager.cc:5526] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc reported cstate change: config changed from index 5 to 6, VOTER 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131) evicted. New cstate: current_term: 1 leader_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "dd425b8989c641ecba6b4b51af3f9f5c" member_type: VOTER last_known_addr { host: "127.19.228.130" port: 45905 } } peers { permanent_uuid: "83d3e5ddd1984a80a46be43f24be46b2" member_type: VOTER last_known_addr { host: "127.19.228.129" port: 44683 } } peers { permanent_uuid: "56a9a548d4964c5e8801aaa7fe1e70cc" member_type: VOTER last_known_addr { host: "127.19.228.132" port: 37439 } attrs { promote: false } } }
I20250114 20:59:19.141441 26303 tablet_service.cc:1514] Processing DeleteTablet for tablet ee053a9ccc5448e492c8e9ad440cfeb1 with delete_type TABLET_DATA_TOMBSTONED (TS 83d3e5ddd1984a80a46be43f24be46b2 not found in new config with opid_index 6) from {username='slave'} at 127.0.0.1:55304
I20250114 20:59:19.146330 26036 tablet_service.cc:1514] Processing DeleteTablet for tablet 5f8e59274e0e4d6782564cd3da8080db with delete_type TABLET_DATA_TOMBSTONED (TS 887b7f1840654ca4a8278f3aa3eba169 not found in new config with opid_index 6) from {username='slave'} at 127.0.0.1:56356
I20250114 20:59:19.158900 26416 ts_tablet_manager.cc:1918] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.5
I20250114 20:59:19.159247 26416 log.cc:1198] T 070a203b588b46e58804064fe5f00952 P dd425b8989c641ecba6b4b51af3f9f5c: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-1-root/wals/070a203b588b46e58804064fe5f00952
I20250114 20:59:19.160590 25781 catalog_manager.cc:4872] TS dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): tablet 070a203b588b46e58804064fe5f00952 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:19.160866 26418 tablet_replica.cc:331] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:19.161854 26418 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:19.162583 26417 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:19.163045 26418 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:19.163448 26417 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:19.164307 26417 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:19.165854 26418 ts_tablet_manager.cc:1905] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:19.166996 26417 ts_tablet_manager.cc:1905] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250114 20:59:19.179982 26418 ts_tablet_manager.cc:1918] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.5
I20250114 20:59:19.180388 26418 log.cc:1198] T 5f8e59274e0e4d6782564cd3da8080db P 887b7f1840654ca4a8278f3aa3eba169: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-2-root/wals/5f8e59274e0e4d6782564cd3da8080db
I20250114 20:59:19.181284 26417 ts_tablet_manager.cc:1918] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.5
I20250114 20:59:19.181660 26417 log.cc:1198] T ee053a9ccc5448e492c8e9ad440cfeb1 P 83d3e5ddd1984a80a46be43f24be46b2: Deleting WAL directory at /tmp/dist-test-taskpzwOya/test-tmp/auto_rebalancer-test.0.AutoRebalancerTest.TestDeletedTables.1736888194149553-20370-0/minicluster-data/ts-0-root/wals/ee053a9ccc5448e492c8e9ad440cfeb1
I20250114 20:59:19.181754 25780 catalog_manager.cc:4872] TS 887b7f1840654ca4a8278f3aa3eba169 (127.19.228.131:32835): tablet 5f8e59274e0e4d6782564cd3da8080db (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:19.182727 25779 catalog_manager.cc:4872] TS 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): tablet ee053a9ccc5448e492c8e9ad440cfeb1 (table test-workload [id=1ae1abdc38cb4632afa1200635244b39]) successfully deleted
I20250114 20:59:21.345333 20370 tablet_server.cc:178] TabletServer@127.19.228.129:44683 shutting down...
I20250114 20:59:21.358922 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:21.359519 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:21.360116 20370 raft_consensus.cc:2238] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.360580 20370 raft_consensus.cc:2267] T be4b7e43ed354fe889c83c2c65a05af7 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.362166 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:21.362602 20370 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.363078 20370 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.364595 20370 tablet_replica.cc:331] stopping tablet replica
I20250114 20:59:21.365027 20370 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.365589 20370 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P 83d3e5ddd1984a80a46be43f24be46b2 [term 1 FOLLOWER]: Raft consensus is shut down!
W20250114 20:59:21.377223 26009 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111) [suppressed 42 similar messages]
W20250114 20:59:21.379498 26009 consensus_peers.cc:487] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer 83d3e5ddd1984a80a46be43f24be46b2 (127.19.228.129:44683): Couldn't send request to peer 83d3e5ddd1984a80a46be43f24be46b2. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.129:44683: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:21.385979 20370 tablet_server.cc:195] TabletServer@127.19.228.129:44683 shutdown complete.
I20250114 20:59:21.398830 20370 tablet_server.cc:178] TabletServer@127.19.228.130:0 shutting down...
W20250114 20:59:21.413399 26011 consensus_peers.cc:487] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): Couldn't send request to peer dd425b8989c641ecba6b4b51af3f9f5c. Status: Remote error: Service unavailable: service kudu.consensus.ConsensusService not registered on TabletServer. This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:21.418694 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:21.419387 20370 tablet_replica.cc:331] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:21.419981 20370 raft_consensus.cc:2238] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.420449 20370 raft_consensus.cc:2267] T be4b7e43ed354fe889c83c2c65a05af7 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.422139 20370 tablet_replica.cc:331] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:21.422595 20370 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.422982 20370 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.424604 20370 tablet_replica.cc:331] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c: stopping tablet replica
I20250114 20:59:21.425040 20370 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:21.425709 20370 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P dd425b8989c641ecba6b4b51af3f9f5c [term 1 FOLLOWER]: Raft consensus is shut down!
W20250114 20:59:21.441669 26011 consensus_peers.cc:487] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 -> Peer dd425b8989c641ecba6b4b51af3f9f5c (127.19.228.130:45905): Couldn't send request to peer dd425b8989c641ecba6b4b51af3f9f5c. Status: Network error: Client connection negotiation failed: client connection to 127.19.228.130:45905: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250114 20:59:21.447796 20370 tablet_server.cc:195] TabletServer@127.19.228.130:0 shutdown complete.
I20250114 20:59:21.464105 20370 tablet_server.cc:178] TabletServer@127.19.228.131:0 shutting down...
I20250114 20:59:21.483667 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:21.484283 20370 tablet_replica.cc:331] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:21.484972 20370 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:21.485718 20370 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.487370 20370 tablet_replica.cc:331] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:21.487838 20370 raft_consensus.cc:2238] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:21.488463 20370 raft_consensus.cc:2267] T be4b7e43ed354fe889c83c2c65a05af7 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.490070 20370 tablet_replica.cc:331] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169: stopping tablet replica
I20250114 20:59:21.490455 20370 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:21.491055 20370 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P 887b7f1840654ca4a8278f3aa3eba169 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.512794 20370 tablet_server.cc:195] TabletServer@127.19.228.131:0 shutdown complete.
I20250114 20:59:21.529770 20370 tablet_server.cc:178] TabletServer@127.19.228.132:0 shutting down...
I20250114 20:59:21.547123 20370 ts_tablet_manager.cc:1500] Shutting down tablet manager...
I20250114 20:59:21.547737 20370 tablet_replica.cc:331] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:21.548339 20370 raft_consensus.cc:2238] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.548796 20370 raft_consensus.cc:2267] T 070a203b588b46e58804064fe5f00952 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.550379 20370 tablet_replica.cc:331] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:21.550784 20370 raft_consensus.cc:2238] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.551132 20370 raft_consensus.cc:2267] T ee053a9ccc5448e492c8e9ad440cfeb1 P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.552649 20370 tablet_replica.cc:331] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc: stopping tablet replica
I20250114 20:59:21.553014 20370 raft_consensus.cc:2238] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus shutting down.
I20250114 20:59:21.553408 20370 raft_consensus.cc:2267] T 5f8e59274e0e4d6782564cd3da8080db P 56a9a548d4964c5e8801aaa7fe1e70cc [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.573143 20370 tablet_server.cc:195] TabletServer@127.19.228.132:0 shutdown complete.
I20250114 20:59:21.585255 20370 master.cc:537] Master@127.19.228.190:33283 shutting down...
I20250114 20:59:21.605583 20370 raft_consensus.cc:2238] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 1 LEADER]: Raft consensus shutting down.
I20250114 20:59:21.606052 20370 raft_consensus.cc:2267] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250114 20:59:21.606348 20370 tablet_replica.cc:331] T 00000000000000000000000000000000 P 1385538f51144fd59540911c9a61d544: stopping tablet replica
I20250114 20:59:21.625223 20370 master.cc:559] Master@127.19.228.190:33283 shutdown complete.
[       OK ] AutoRebalancerTest.TestDeletedTables (9484 ms)
[----------] 14 tests from AutoRebalancerTest (167345 ms total)

[----------] Global test environment tear-down
[==========] 14 tests from 1 test suite ran. (167346 ms total)
[  PASSED  ] 13 tests.
[  SKIPPED ] 1 test, listed below:
[  SKIPPED ] AutoRebalancerTest.TestMaxMovesPerServer
I20250114 20:59:21.664839 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:5204: suppressed but not reported on 27 messages since previous log ~6 seconds ago
I20250114 20:59:21.665067 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tserver/ts_tablet_manager.cc:594: suppressed but not reported on 7 messages since previous log ~5 seconds ago
I20250114 20:59:21.665243 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tserver/ts_tablet_manager.cc:542: suppressed but not reported on 7 messages since previous log ~5 seconds ago
I20250114 20:59:21.665436 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/master/catalog_manager.cc:4615: suppressed but not reported on 19 messages since previous log ~5 seconds ago
I20250114 20:59:21.665686 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tserver/heartbeater.cc:643: suppressed but not reported on 14 messages since previous log ~149 seconds ago
I20250114 20:59:21.666059 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tablet/tablet.cc:2367: suppressed but not reported on 38 messages since previous log ~16 seconds ago
I20250114 20:59:21.666227 20370 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/rpc/proxy.cc:239: suppressed but not reported on 2 messages since previous log ~0 seconds ago
ThreadSanitizer: reported 5 warnings