SolrCloud unexpected recovery after restart

I have a SolrCloud (version 6) installation with replication factor 3 and 150 shards across 30 servers.

I see strange behavior after restarting Solr on a single server: sometimes everything is fine and Solr comes up without any problems after replaying its transaction logs, but more often it starts a full recovery from its replicas. Sometimes the recovery involves all shards on the node, sometimes only a few of them. There are no warning or error log entries about any failure before the recovery starts.

Is it possible to stop Solr gracefully?

I also don't understand why Solr downloads all data files of a replica's index for each shard instead of fetching only the latest changes.

solr solrcloud






edited Nov 21 '18 at 20:44 by kellyfj
asked Nov 21 '18 at 17:59 by Vadim PS

  • Can you share your autoCommit settings? My guess is that while the Solr node was stopped and restarted, other replicas received updates and/or ZooKeeper recorded a new version number in its internal state, which would cause a Solr recovery on restart.

    – kellyfj
    Nov 21 '18 at 19:22

  • I have solr.autoSoftCommit.maxTime = 60000 and solr.autoCommit.maxTime = 600000.

    – Vadim PS
    Nov 22 '18 at 6:20

1 Answer






Your autoCommit setting of 600000 is very high (600 seconds). What does that mean for SolrCloud in practice?

It means documents can sit in the transaction log (flushed, but not fsync'd) for up to ten minutes before a hard commit. On restart, the Solr node contacts the leader and either:




  • replays the documents from its own tlog, if fewer than 100 new updates have been received by the leader, or

  • does an old-style full replication from the leader to catch up, if the leader received more than 100 updates while the node was offline.

See https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/ for the details.
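For reference, here is a minimal sketch of how the commit settings mentioned in the comments are typically wired up, assuming a stock solrconfig.xml with the usual property placeholders (your config may differ; the values are illustrative, not a recommendation for every workload):

    # In solr.in.sh (or passed as -D options to bin/solr). These feed the
    # ${solr.autoCommit.maxTime:...}-style placeholders found in the default
    # solrconfig.xml. A lower hard-commit interval keeps the tlog small, so
    # less has to be replayed (or re-fetched) after a restart.
    SOLR_OPTS="$SOLR_OPTS \
      -Dsolr.autoCommit.maxTime=60000 \
      -Dsolr.autoSoftCommit.maxTime=60000"

    # The corresponding default solrconfig.xml entries look like:
    #   <autoCommit>
    #     <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    #     <openSearcher>false</openSearcher>
    #   </autoCommit>
    #   <autoSoftCommit>
    #     <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
    #   </autoSoftCommit>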



My guess is that in your case you are hitting the latter. Just be sure to shut down gracefully via the Solr scripts: make sure you're not doing any "kill -9", and check that Solr isn't dying due to heap memory problems.
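As a minimal sketch of a graceful stop, assuming a standard installation that uses the bundled bin/solr script (the port is a placeholder):

    # Graceful stop of the Solr instance listening on port 8983; the script
    # asks Solr to shut down cleanly and only falls back to a forceful kill
    # after a timeout.
    bin/solr stop -p 8983

    # Some installs expose the shutdown timeout in solr.in.sh, e.g.:
    #   SOLR_STOP_WAIT=180   # seconds to wait before forcing a kill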



Another problem I've seen (in SolrCloud 5.3, at least) is that if you restart a Solr node before ZooKeeper realizes the node has gone, SolrCloud can leave ZooKeeper in a confused state where it thinks the Solr node is running when it isn't. So one thing I generally like to do is check that all the other nodes see the correct state of the cluster (i.e. that the node is marked as gone) before restarting it.
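One way to do that check, as a rough sketch (host, port and collection name are placeholders):

    # Ask another node for the cluster view before restarting: confirm the
    # stopped node is no longer listed under live_nodes and that its replicas
    # show as "down" rather than "active".
    curl "http://other-node:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"

    # Or run a quick per-collection health check through ZooKeeper:
    bin/solr healthcheck -c my_collection -z zk1:2181,zk2:2181,zk3:2181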






answered Nov 23 '18 at 11:18 by kellyfj
