{"id":2712,"date":"2023-02-20T19:04:48","date_gmt":"2023-02-20T19:04:48","guid":{"rendered":"https:\/\/brakkee.org\/site\/?p=2712"},"modified":"2023-02-26T22:47:03","modified_gmt":"2023-02-26T22:47:03","slug":"using-argocd-with-k3d-to-manage-another-k3d-cluster","status":"publish","type":"post","link":"https:\/\/brakkee.org\/site\/2023\/02\/20\/using-argocd-with-k3d-to-manage-another-k3d-cluster\/","title":{"rendered":"Using argocd with k3d to manage another k3d cluster"},"content":{"rendered":"<p>I am experimenting currently with <a href=\"https:\/\/argoproj.github.io\/cd\/\">argocd<\/a> with the aim to have an (almost) fully automated bootstrapping of my kubernetes cluster at home. One of the first things to do when experimenting is to have a test environment. There are different deployment options for argocd to consider:<\/p>\n<ul>\n<li>deploy argocd in the cluster that it is managing<\/li>\n<li>deploy argocd in another cluster<\/li>\n<\/ul>\n<p><!--more--><\/p>\n<p>Both have there advantages and disadvantages. With the first option, the advantage is that remote access is not required, but in that case secrets to access the git repository are available on the target cluster which might not be what you want. Also, there is some additional load on the target kubernetes cluster for polling the various git repositories that contain application definitions.<\/p>\n<p>The advantage of the second one is that it is a more natural one. In effect, the cluster is not managing itself but is managed from the outside. Advantage is that git repo secrets are not required on the target cluster. Also, it allows the case where multiple kubernetes clusters must have the same configuration through the argocd ApplicationSet concept.<\/p>\n<h2>Environment setup<\/h2>\n<p>In this post I am investigating the second option for a development environment. There is one cluster<em> k3d-xyz<\/em> that contains the argocd deployment and another one <em>k3d-abc<\/em> that must be managed. 
Both clusters are created using <em>k3d cluster create<\/em>. Argo is installed on the k3d-xyz cluster using helm as follows:<\/p>\n<pre>helm repo add argo https:\/\/argoproj.github.io\/argo-helm\r\nhelm install argo argo\/argo-cd --namespace argocd --create-namespace --version 5.20.4\r\n<\/pre>\n<p>The argocd command line is also installed on the host:<\/p>\n<pre>curl -sSL -o argocd-linux-amd64 https:\/\/github.com\/argoproj\/argo-cd\/releases\/latest\/download\/argocd-linux-amd64\r\ninstall -m 555 argocd-linux-amd64 ~\/bin\/argocd\r\nrm argocd-linux-amd64\r\n<\/pre>\n<h2>Managing <em>k3d-abc<\/em> through <em>k3d-xyz<\/em><\/h2>\n<p>The standard approach is simply to run<\/p>\n<pre>argocd cluster add k3d-abc\r\n<\/pre>\n<p>when the active context is <em>k3d-xyz<\/em>. This will fail, however, since <em>argocd<\/em> takes the server configuration for <em>k3d-abc<\/em> from the <em>.kube\/config<\/em> file on the host, and that file contains a URL that is only reachable from the host, not from the docker container in which <em>k3d-xyz<\/em> is running.<\/p>\n<p>To deal with this, we must configure the connection from <em>k3d-xyz<\/em> to <em>k3d-abc<\/em> manually using a secret.<\/p>\n<p>The first step of the failed <em>argocd cluster add<\/em> command already created an <em>argocd-manager<\/em> service account on <em>k3d-abc<\/em>, so we can reuse that.<br \/>\nThe\u00a0<em>k3d-abc<\/em>\u00a0cluster must be added to the network of the\u00a0<em>k3d-xyz<\/em>\u00a0cluster:<\/p>\n<pre>docker network connect k3d-xyz k3d-abc-server-0\r\n<\/pre>\n<p>This allows the\u00a0<em>k3d-xyz<\/em>\u00a0cluster to access the API-server of\u00a0<em>k3d-abc<\/em>\u00a0on\u00a0<em>k3d-abc-server-0:6443<\/em>. 
You can verify this by exec-ing into the server container of the\u00a0<em>k3d-xyz<\/em>\u00a0cluster and using telnet.<\/p>\n<p>Next up is to obtain the bearer token of the <em>argocd-manager<\/em> service account on the\u00a0<em>k3d-abc<\/em>\u00a0cluster:<\/p>\n<pre>kubectx k3d-abc\r\nkubectl get sa -n kube-system argocd-manager\r\nTOKEN=\"$( kubectl get secret -n kube-system \\\r\n            argocd-manager-token-cww95  -o json  | \r\n          jq  -r .data.token | base64 -d )\"\r\n<\/pre>\n<p>Note that the last part of the secret name above will differ in your case (just use autocomplete). Also, do <strong>not<\/strong> use the<\/p>\n<pre>  kubectl create token  -n kube-system argocd-manager\r\n<\/pre>\n<p>command, since that creates a time-limited token, and in this setup we want a token that does not expire.<\/p>\n<p>The next step is to define the cluster secret:<\/p>\n<pre>apiVersion: v1\r\nkind: Secret\r\nmetadata:\r\n  namespace: argocd\r\n  name: k3d-abc-cluster-secret\r\n  labels:\r\n    argocd.argoproj.io\/secret-type: cluster\r\ntype: Opaque\r\nstringData:\r\n  name: k3d-abc\r\n  server: \"https:\/\/k3d-abc-server-0:6443\"\r\n  config: |\r\n    {\r\n      \"bearerToken\": \"TOKEN\",\r\n      \"tlsClientConfig\": {\r\n        \"insecure\": false,\r\n        \"caData\": \"CADATA\"\r\n      }\r\n    }\r\n<\/pre>\n<p>Here, <em>TOKEN<\/em> is the value of the <em>TOKEN<\/em> variable above. <em>CADATA<\/em> is the CA data obtained from the <em>.kube\/config<\/em> file for the <em>k3d-abc<\/em> cluster.<\/p>\n<p>After this, you might need to stop and start the\u00a0<code class=\"notranslate\">k3d-xyz<\/code>\u00a0cluster. 
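<\/p>\n<p>As an aside, the <em>CADATA<\/em> value can be read from the kubeconfig on the host, for example like this (a sketch; the jsonpath filter assumes the cluster entry is named <em>k3d-abc<\/em>, which is what k3d generates):<\/p>\n<pre>CADATA=\"$( kubectl config view --raw \\\r\n  -o jsonpath='{.clusters[?(@.name==\"k3d-abc\")].cluster.certificate-authority-data}' )\"\r\n<\/pre>\n<p>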
Restarting refreshes the DNS entries of the coredns server inside the\u00a0<em>k3d-xyz<\/em>\u00a0cluster so that it can resolve\u00a0<em>k3d-abc-server-0<\/em>.<\/p>\n<p>With this approach, I can add an application on\u00a0<em>k3d-xyz<\/em>\u00a0to deploy it on\u00a0<em>k3d-abc<\/em>:<\/p>\n<pre>apiVersion: argoproj.io\/v1alpha1\r\nkind: Application\r\nmetadata:\r\n  name: directory-app\r\n  namespace: argocd\r\nspec:\r\n  destination:\r\n    namespace: directory-app\r\n    server: \"https:\/\/k3d-abc-server-0:6443\"\r\n  project: default\r\n  source:\r\n    path: guestbook-with-sub-directories\r\n    repoURL: \"https:\/\/github.com\/mabusaa\/argocd-example-apps.git\"\r\n    targetRevision: master\r\n  syncPolicy:\r\n    syncOptions:\r\n      - CreateNamespace=true\r\n<\/pre>\n<h2>Gotchas<\/h2>\n<p>The above approach is not perfect. There are issues with it when you restart your machine: in that case, the custom coredns configuration set up at initialization of the k3d cluster is lost.<\/p>\n<p>You can verify this by looking at the <em>coredns<\/em> config map on the <em>k3d-xyz<\/em> cluster using<\/p>\n<pre>$ kubectl get cm -n kube-system coredns -o json | \r\n    jq -r .data.NodeHosts\r\n172.23.0.1 host.k3d.internal\r\n172.23.0.3 registry.localhost\r\n172.23.0.2 k3d-xyz-server-0\r\n172.23.0.4 k3d-xyz-serverlb\r\n172.23.0.5 k3d-abc-server-0\r\n<\/pre>\n<p>Here you should see the <em>k3d-abc-server-0<\/em> host. 
If you don&#8217;t see this, then simply stopping and starting the <em>k3d-xyz<\/em> cluster will provide a fix:<\/p>\n<pre>k3d cluster stop k3d-xyz\r\nk3d cluster start k3d-xyz\r\n<\/pre>\n<p>An alternative is to create a copy of the coredns configmap and reapply it on startup, then do a<\/p>\n<pre>kubectl rollout restart deploy -n kube-system coredns\r\n<\/pre>\n<p>However, restarting the argocd cluster is just as easy.<\/p>\n<h2>Final thoughts<\/h2>\n<p>The issue with k3d appears to be that upon a restart (even restarting docker will do), the coredns configmap is initialized without the entries from the <a href=\"https:\/\/github.com\/k3d-io\/k3d\/issues\/1112\">docker network<\/a>. For now, the workaround with the restart is the best I can get.<\/p>\n<p>I also tried host mode networking for the <em>k3d-xyz<\/em> cluster, which should fix the issue, but it also did not work for some reason. In addition, I got some weird messages when getting the initial password for <em>argocd<\/em>, like this:<\/p>\n<pre>$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=\"{.data.password}\" | base64 -d\r\nE0220 16:29:03.172722   28050 memcache.go:255] couldn't get resource list for metrics.k8s.io\/v1beta1: the server is currently unable to handle the request\r\n<\/pre>\n<p>Surprisingly, <em>argocd cluster add<\/em> also did not work out of the box. Even if it had worked, it would have been limited to running at most one <em>argocd<\/em> cluster. For these reasons, I did not pursue this option any further.<\/p>\n<p>This post is based on an answer that I gave recently in a <a href=\"https:\/\/github.com\/k3d-io\/k3d\/discussions\/596\">github discussion<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I am currently experimenting with argocd, with the aim of (almost) fully automated bootstrapping of my kubernetes cluster at home. One of the first things to do when experimenting is to have a test environment. 
There are different &hellip; <a href=\"https:\/\/brakkee.org\/site\/2023\/02\/20\/using-argocd-with-k3d-to-manage-another-k3d-cluster\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[10],"tags":[],"_links":{"self":[{"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/posts\/2712"}],"collection":[{"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/comments?post=2712"}],"version-history":[{"count":37,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/posts\/2712\/revisions"}],"predecessor-version":[{"id":2758,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/posts\/2712\/revisions\/2758"}],"wp:attachment":[{"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/media?parent=2712"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/categories?post=2712"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/brakkee.org\/site\/wp-json\/wp\/v2\/tags?post=2712"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}