error marking master: timed out waiting for the condition [kubernetes]



























I'm just starting to learn Kubernetes. I've installed CentOS 7.5 with SELinux disabled, and installed kubectl, kubeadm and kubelet from the Kubernetes YUM repository.



However, when I run the kubeadm init command, I get this error message:



[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [51.75.201.75 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vps604805.ovh.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 51.75.201.75]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.003496 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node vps604805.ovh.net as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node vps604805.ovh.net as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition


According to the Linux Foundation course, no further commands should be needed to create my first cluster on this VM.



Am I wrong?



Firewalld does have the ports open: 6443/tcp and 10248-10252/tcp.
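For reference, ports like these are usually opened and verified with firewall-cmd roughly as follows (a minimal sketch assuming the default zone; the exact rules depend on your setup and are not part of the original question):

# open the two ports kubeadm warns about, make them permanent, reload, then verify
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
firewall-cmd --list-ports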

































  • Have you tried debugging the output at a more verbose level with kubeadm init -v 9? What about the kubelet service: systemctl status kubelet -l? I don't see that you have passed the --pod-network-cidr option to the kubeadm init command for the subsequent Pod network installation.

    – mk_sta
    Nov 20 '18 at 14:25











  • This is the output: paste.fedoraproject.org. You get a lot of 404 errors in JSON all the time. (output cut)

    – sincorchetes
    Nov 20 '18 at 16:34











  • kubeadm init --pod-network-cidr=10.0.0.0/16

    – sincorchetes
    Nov 20 '18 at 16:35













  • systemctl status kubelet.service -l paste.fedoraproject.org/paste/hGilqxoKPYqSYNRTP6M4og

    – sincorchetes
    Nov 20 '18 at 16:36
















linux docker kubernetes virtualbox






asked Nov 19 '18 at 23:14









sincorchetes

2 Answers
I would recommend bootstrapping the Kubernetes cluster as described in the official documentation. I went through the following steps to build a cluster on the same CentOS version, CentOS Linux release 7.5.1804 (Core), and will share them with you; I hope they help you get past the issue during installation.



First wipe your current cluster installation:



# kubeadm reset -f && rm -rf /etc/kubernetes/


Add the Kubernetes repo for installing kubeadm, kubelet and kubectl:



# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF


Check whether SELinux is in permissive mode:



# getenforce
Permissive
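
If getenforce reports Enforcing instead, the usual way to switch to permissive mode (as in the official kubeadm install guide) is roughly the following; the first command changes the running system, the sed edit keeps it permissive across reboots:

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config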


Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl:



# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
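
Note (an addition, not from the original answer): those two bridge keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about unknown keys, loading the module first and making it persistent usually fixes it:

# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf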


Install required Kubernetes components and start services:



# yum update && yum upgrade && yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes

# systemctl start docker kubelet && systemctl enable docker kubelet


Deploy the cluster via kubeadm:



kubeadm init --pod-network-cidr=10.244.0.0/16


I prefer to install Flannel as the main CNI in my cluster. Because there are some prerequisites for a proper Pod network installation, I've passed the --pod-network-cidr=10.244.0.0/16 flag to the kubeadm init command; this is the Pod CIDR that Flannel's default manifest expects.



Create the Kubernetes home directory for your user and store the config file there:



$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
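
At this point kubectl should be able to reach the API server; a quick sanity check (not part of the original answer) could be the commands below. The master node will show NotReady until the Pod network from the next step is installed:

$ kubectl get nodes
$ kubectl cluster-info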


Install the Pod network; in my case it was Flannel:



$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml



Finally check Kubernetes core Pods status:



$ kubectl get pods --all-namespaces



NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-4x7zq              1/1     Running   0          36m
kube-system   coredns-576cbf47c7-666jm              1/1     Running   0          36m
kube-system   etcd-centos-7-5                       1/1     Running   0          35m
kube-system   kube-apiserver-centos-7-5             1/1     Running   0          35m
kube-system   kube-controller-manager-centos-7-5    1/1     Running   0          35m
kube-system   kube-flannel-ds-amd64-2bmw9           1/1     Running   0          33m
kube-system   kube-proxy-pcgw8                      1/1     Running   0          36m
kube-system   kube-scheduler-centos-7-5             1/1     Running   0          35m
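
Side note, not from the original answer: since this is a single-VM cluster, the node-role.kubernetes.io/master:NoSchedule taint that kubeadm applies (the same taint mentioned in your log) keeps ordinary Pods off the node. If you want to schedule workloads on this master, you can remove it:

$ kubectl taint nodes --all node-role.kubernetes.io/master-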


In case you still have any doubts, just write down a comment below this answer.






answered Nov 21 '18 at 10:50 by mk_sta
























  • Thanks, that works! :D I just have a remaining problem with authentication, but that's for another post or further research. Thanks a lot!

    – sincorchetes
    Nov 24 '18 at 10:22

































You are hitting the following issue in Kubernetes:



https://github.com/kubernetes/kubeadm/issues/1092



The workaround is to provide --node-name=<hostname>. Go through the ticket above for more info. Hope this helps.
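
A hedged sketch of what that could look like on your host (the FQDN is taken from your log; the Pod CIDR depends on the CNI you plan to install and is only an example):

# kubeadm reset -f
# kubeadm init --node-name=vps604805.ovh.net --pod-network-cidr=10.244.0.0/16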



EDIT:
I had the same issue with kubeadm 1.10.0.
After removing --hostname-override from the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, I was at least able to initialize the cluster. I didn't provide --node-name in my cluster.
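
A quick, hypothetical check (not from the original answer) for whether such an override is present, followed by the usual reload after editing the drop-in:

# grep -R hostname-override /etc/systemd/system/kubelet.service.d/ /var/lib/kubelet/kubeadm-flags.env
# systemctl daemon-reload && systemctl restart kubelet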






answered Nov 19 '18 at 23:40 by Prafull Ladha (edited Nov 20 '18 at 0:09)


























  • I've added master.vps604805.ovh.net, and also did kubeadm reset and tried with just vps604805.ovh.net, and I get the same error. Versions: kubernetes-cni-0.6.0-0.x86_64, kubelet-1.12.2-0.x86_64, kubectl-1.12.2-0.x86_64, kubeadm-1.12.2-0.x86_64. Interestingly, when I try to execute kubectl version I get: The connection to the server localhost:8080 was refused - did you specify the right host or port? This port is open.

    – sincorchetes
    Nov 20 '18 at 0:02













  • Edited my answer, try that

    – Prafull Ladha
    Nov 20 '18 at 0:09











  • Thanks for taking the time. In /etc/systemd/system/kubelet.service.d/10-kubeadm.conf I don't have --hostname-override.

    – sincorchetes
    Nov 20 '18 at 0:37











  • Could you also remove the --node-name option? Kubernetes is going to change the timeout behavior in 1.13; they have acknowledged it. Until then, we need to find a workaround for this.

    – Prafull Ladha
    Nov 20 '18 at 0:39











  • Yes, I've tried to create the cluster without --node-name but I get the same error.

    – sincorchetes
    Nov 20 '18 at 0:58










