Setting up a Kubernetes cluster on Ubuntu 18.04























I'm following this tutorial on creating a Kubernetes cluster on Ubuntu 16.04 (I'm using 18.04, but there is no tutorial for that version yet). I finished the first three steps and everything went fine. I'm now trying to initialise the cluster with the master node in it, and I'm a bit stuck.



When I run the master.yml playbook with



ansible-playbook -i hosts ~/kube-cluster/master.yml


I get the following output:



$ ansible-playbook -i hosts master.yml
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)

PLAY [master] *********************************************************************************

TASK [Gathering Facts] ************************************************************************
ok: [master]

TASK [initialize the cluster] *****************************************************************
changed: [master]

TASK [create .kube directory] *****************************************************************
[WARNING]: Module remote_tmp /home/ubuntu/.ansible/tmp did not exist and was created with a
mode of 0700, this may cause issues when running as another user. To avoid this, create the
remote_tmp dir with the correct permissions manually

changed: [master]

TASK [copy admin.conf to user's kube config] **************************************************
changed: [master]

TASK [install Pod network] ********************************************************************
changed: [master]

PLAY RECAP ************************************************************************************
master : ok=5 changed=4 unreachable=0 failed=0
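
For reference, the tasks in master.yml map to the output above roughly like this. This is a sketch reconstructed from the task names and my reading of the tutorial, not the actual file; in particular, the Flannel manifest URL and the pod network CIDR are assumptions and may differ in your copy:

- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      # bootstrap the control plane; 'creates' keeps the task idempotent
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install Pod network
      become: yes
      become_user: ubuntu
      # assumed Flannel manifest; the tutorial may pin a different URL or version
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt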


The only difference compared to the tutorial is the warning about the /home/ubuntu/.ansible/tmp directory permissions. When I SSH into the master node and run



kubectl get nodes


I get the following result:



NAME         STATUS     ROLES    AGE   VERSION
ip-address   NotReady   master   16m   v1.12.2


Instead of the desired



NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   16m   v1.12.2


I've tried creating the tmp directory as the ubuntu user on the server so that the warning is resolved. Unfortunately, this changes nothing: the master node is still NotReady and still shows its IP address as NAME.
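
A useful diagnostic (standard kubectl, not from the tutorial) is to describe the node; the Conditions section states why the kubelet reports NotReady, which in cases like this is typically an uninitialized CNI network plugin:

kubectl describe node <node-name>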



Question: How do I resolve this problem? How can I correctly initialise the cluster so that the master node is configured properly and reports Ready?




































Tags: networking, server, ssh, maas, kubernetes






asked Nov 23 at 13:12 by Mr. President






















1 Answer




































I reproduced your problem by recreating the same setup, using Vagrant to run the nodes.



Repo here, if you want to try orchestrating the node setup with Vagrant.



Just like you, I ran into the issue you described. It turns out Flannel has a couple of issues with CoreDNS on Ubuntu Bionic: Flannel interferes with the CoreDNS setup and causes the pods to stay in a Pending state.



You can use this command to check the pod state:




ubuntu@ubuntu-bionic:~$ kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-hlvdj                0/1     Pending   0          52m
coredns-576cbf47c7-xmljj                0/1     Pending   0          52m
etcd-ubuntu-bionic                      1/1     Running   0          52m
kube-apiserver-ubuntu-bionic            1/1     Running   0          52m
kube-controller-manager-ubuntu-bionic   1/1     Running   0          52m
kube-proxy-gvqk4                        1/1     Running   0          52m
kube-scheduler-ubuntu-bionic            1/1     Running   0          51m
kubernetes-dashboard-77fd78f978-5flj8   0/1     Pending   0          4m30s



After a couple of searches, I found the fix here on their issues page.



Install a different CNI plugin; they used Weave Net there.
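
If Flannel has already been applied, it may be worth deleting its resources first so the two network plugins don't conflict. A sketch, assuming the Flannel manifest URL from the tutorial:

kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml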




kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
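
(The $(kubectl version | base64 | tr -d '\n') query parameter passes your client and server versions to the Weave endpoint so it can serve a manifest compatible with your cluster.)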




More details here in the docs.



From there, your containers should start and the CoreDNS pods should be Running.




ubuntu@ubuntu-bionic:~$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-jrlbb                1/1     Running   0          11m
coredns-576cbf47c7-nfjq8                1/1     Running   0          11m
etcd-ubuntu-bionic                      1/1     Running   0          10m
kube-apiserver-ubuntu-bionic            1/1     Running   0          10m
kube-controller-manager-ubuntu-bionic   1/1     Running   0          10m
kube-proxy-nrbpx                        1/1     Running   0          11m
kube-scheduler-ubuntu-bionic            1/1     Running   0          10m
weave-net-459mw                         2/2     Running   0          10m



And finally:




ubuntu@ubuntu-bionic:~$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
ubuntu-bionic   Ready    master   14m   v1.12.2
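
The node can take a minute or two to flip from NotReady to Ready once the network pods start; you can watch it happen with:

kubectl get nodes -w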






answered Nov 25 at 10:45 by Bakare Emmanuel





























