Waiting for vcluster to come up #1584

Closed
eumel8 opened this issue Mar 5, 2024 · 6 comments · Fixed by #1594

eumel8 (Contributor) commented Mar 5, 2024

What happened?

I have a vcluster instance created in a namespace. From another pod in the same namespace I want to connect to the vcluster with the vcluster CLI. The ServiceAccount has permissions to get/list pods. I can execute vcluster list and see the vcluster instance, but when I execute vcluster -n <namespace> connect <instance> ..., the only thing I get is "Waiting for vcluster to come up". The in-cluster kube-config is used. Which connections or resources are required to connect? There is also no additional output with --debug.

What did you expect to happen?

vcluster -n <namespace> connect <instance> -- kubectl get nodes

How can we reproduce it (as minimally and precisely as possible)?

Install a vcluster instance and, additionally, a Pod with the vcluster/kubectl CLIs.

Anything else we need to know?

% kubectl -n vc2 exec -it vc2-register-rancher-create-pzkmd -- sh
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
vc2-register-rancher-create-pzkmd   1/1     Running   0          10m
vc2-vcluster-0                      1/1     Running   0          10m

$ vcluster list
  
        NAME     | CLUSTER | NAMESPACE | STATUS  | VERSION | CONNECTED |            CREATED            |  AGE  | DISTRO  
  ---------------+---------+-----------+---------+---------+-----------+-------------------------------+-------+---------
    vc2-vcluster |         | vc2       | Running | 0.19.3  |           | 2024-03-05 23:11:11 +0000 UTC | 6m50s | OSS     
  
$ vcluster -n vc2 connect vc2-vcluster --debug -- kubectl get nodes
23:18:36 debug Error creating pro client: couldn't find vCluster.Pro config: please make sure to run 'vcluster login' to connect to an existing instance or 'vcluster pro start' to deploy a new instance
23:18:36 info Waiting for vcluster to come up...

Host cluster Kubernetes version

$ kubectl version
# 1.26.11

Host cluster Kubernetes distribution

# Rancher RKE1

vcluster version

$ vcluster --version
# 1.18.1

vcluster Kubernetes distribution (k3s (default), k8s, k0s)

# k3s

OS and Arch

OS: Ubuntu 22.04
Arch: amd64
eumel8 added the kind/bug label Mar 5, 2024
eumel8 changed the title from "Waiting for vcluster coming up" to "Waiting for vcluster to come up" Mar 5, 2024
heiko-braun (Contributor) commented Mar 13, 2024

Hi @eumel8, thanks for reporting this issue. We will take a look at it.

To help us get started, can you confirm that you can connect to the virtual cluster from outside the host cluster?

Or do you attempt to do it from within the pod because outside access is not possible at all?

heiko-braun self-assigned this Mar 13, 2024
heiko-braun (Contributor) commented Mar 13, 2024

Hey @eumel8, I think I have an idea of what might be going on.

AFAIK, by default vcluster connect would resolve the URL from the kubeconfig.
For instance, on my local kind cluster it would be this:

$ cat ~/.kube/config | yq -r '.clusters[1].cluster.server'
https://127.0.0.1:58756

Now, obviously this doesn't work from within your Kubernetes cluster.
What you can try instead is to reference the pod directly:

vcluster connect -n <NAMESPACE> --pod <POD_NAME> <NAME_OF_CLUSTER>
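
With the names from your kubectl get pod output above, that would presumably be something like:

$ vcluster connect -n vc2 --pod vc2-vcluster-0 vc2-vcluster -- kubectl get nodes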

I think that should work from within the cluster.

eumel8 (Contributor, Author) commented Mar 13, 2024

Hello @heiko-braun, thanks for taking a look at this issue. My initial reaction was just to wonder why vcluster list works and returns valid output while only the connect fails. Connect is some kind of port-forwarding, so my idea was a network issue in the Pod. I also tried --pod and --address with the Pod address, without success.
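
Connect appears to set up a port-forward to the vcluster pod, so presumably something roughly equivalent to this (the syncer port 8443 is my assumption, not taken from the logs):

$ kubectl -n vc2 port-forward vc2-vcluster-0 8443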

So I started debugging and ended up here, printing out the error message:

16:52:46 info Waiting for vcluster to come up...%!(EXTRA *fmt.wrapError=could not Get the vc-vc1-vcluster secret in order to read kubeconfig: secrets "vc-vc1-vcluster" is forbidden: User "system:serviceaccount:vc1:default" cannot get resource "secrets" in API group "" in the namespace "vc1") 

So the ServiceAccount whose token is used in this Pod has no access to the Secret with the cluster certificates. After fixing the RBAC (a minimal sketch follows after the output below), it works as expected:

go run cmd/vclusterctl/main.go connect vc1-vcluster -- kubectl get ns

NAME              STATUS   AGE
default           Active   62m
kube-system       Active   62m
kube-public       Active   62m
kube-node-lease   Active   62m
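
For anyone hitting the same thing, here is a minimal sketch of the missing RBAC, using the ServiceAccount and Secret names from the error message above (the role name is made up, and your setup may need further rules, e.g. for port-forwarding):

$ # allow reading the Secret that holds the vcluster kubeconfig
$ kubectl -n vc1 create role vcluster-connect \
    --verb=get --resource=secrets --resource-name=vc-vc1-vcluster
$ # bind the role to the ServiceAccount the Pod runs as
$ kubectl -n vc1 create rolebinding vcluster-connect \
    --role=vcluster-connect --serviceaccount=vc1:default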

Solved for me, but the debug output could be improved. Thx!

heiko-braun (Contributor) commented:

Hey @eumel8, I am glad you got it working.

And yes, once I tried to reproduce your setup, I noticed that the port-forwarding permissions, amongst others, were missing as well.
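
For reference, a sketch of granting that one too (port-forwarding goes through the pods/portforward subresource; the role name is made up and the namespace/ServiceAccount are taken from the example above):

$ kubectl -n vc1 create role vcluster-portforward \
    --verb=get,create --resource=pods/portforward
$ kubectl -n vc1 create rolebinding vcluster-portforward \
    --role=vcluster-portforward --serviceaccount=vc1:default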

I am curious, what led you to this particular setup? Why manage virtual clusters from pods within the host cluster?

eumel8 (Contributor, Author) commented Mar 14, 2024

Hi @heiko-braun, Rancher still lags behind on vcluster support. There were plans for an integration, but no progress over the last few years. As a workaround we have a Helm chart, as part of a Crossplane Composition, that registers the vcluster in Rancher. This job needs to run partly from within the vcluster, which is what this vcluster connect was good for. Meanwhile it has become a little more complicated because kubectl is only available in the init container. So, with some more workarounds, this is running and the vcluster is managed by Rancher. The customer gets the same experience as with a real Kubernetes cluster. Only the part with the Rancher cluster-agent needs rework to become more robust. Maybe vcluster already has mechanisms for starting such things.

heiko-braun (Contributor) commented:

@eumel8 thanks for sharing your use case. We have been working on improving the Rancher integration and there will be some announcements soon.

deniseschannon added the question label May 8, 2024