Autoscale using Virtual Kubelet with Azure ACI

Cluster and Azure Account Setup

$ az account list -o table

Name             CloudName    SubscriptionId                        State    IsDefault
---------------  -----------  ------------------------------------  -------  -----------
Microsoft Azure  AzureCloud   ab98f6b9-af9a-494c-abd3-ed476101ad0b  Enabled  True

$ export AZURE_SUBSCRIPTION_ID="ab98f6b9-af9a-494c-abd3-ed476101ad0b"

## Enable ACI in your subscription:
$ az provider register -n Microsoft.ContainerInstance
$ az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table
Namespace                    RegistrationState
---------------------------  -------------------
Microsoft.ContainerInstance  Registered

## Create a Resource Group for ACI
$ export ACI_REGION=southeastasia
##$ export ACI_REGION=WestEurope
##$ az group create --name aci-group --location "$ACI_REGION"
##$ export AZURE_RG=aci-group

Create and install the application cluster

$ export AZURE_APP_RG=louis-rg
$ az group create --name $AZURE_APP_RG --location "$ACI_REGION"
$ export AKS_APP_CLUSTER_NAME=myAppCluster
$ az aks create \
    --resource-group $AZURE_APP_RG \
    --name $AKS_APP_CLUSTER_NAME \
    --node-count 1 \
    --enable-addons monitoring \
    --generate-ssh-keys

$ az aks get-credentials --resource-group $AZURE_APP_RG --name $AKS_APP_CLUSTER_NAME

## Init Helm Tiller in AKS
$ cat >helm-rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF

## See Install applications with Helm in AKS [3]
$ kubectl apply -f helm-rbac.yaml

$ helm init --service-account tiller

## Install application
## stable/kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt. see [1][2].
$ export LEGO_EMAIL=<your-email>
$ helm install stable/kube-lego --name kube-lego --namespace kube-system --set config.LEGO_EMAIL=$LEGO_EMAIL,config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory
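
## kube-lego acts on Ingress resources that carry the tls-acme annotation and a tls section.
## A minimal sketch of such an Ingress follows; the name, host, secret, and service are
## placeholders (the fr-demo chart installed later defines its own Ingress).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress                         # placeholder
  annotations:
    kubernetes.io/tls-acme: "true"           # tells kube-lego to request a certificate
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - demo.example.com                       # placeholder host
    secretName: demo-tls                     # kube-lego stores the issued certificate here
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-frontend         # placeholder service
          servicePort: 80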

## Install ingress controller w/ helm.
## stable/nginx-ingress is an Ingress controller that uses ConfigMap to store the nginx configuration.
$ helm install stable/nginx-ingress --name ingress --namespace kube-system
## A LoadBalancer service named ingress-nginx-ingress-controller and a public IP address are created.

## Get the Public IP of the ingress controller.
$ kubectl get services --namespace kube-system --watch
Output:
ingress-nginx-ingress-controller        LoadBalancer   10.0.141.188   52.163.101.207   80:32311/TCP,443:30609/TCP   9m51s

## Clone and configure the demo application.
$ git clone https://github.com/rbitia/aci-demos.git
$ cd aci-demos/vk-burst-demo
$ export APP_NAME=vk-burst-demo
$ export LB_PUBLIC_ADDRESS=`kubectl get services --namespace kube-system ingress-nginx-ingress-controller -o \
  jsonpath='{.status.loadBalancer.ingress[*].ip}'`
$ ./assignFQDNtoIP.sh -g $AZURE_APP_RG -d $APP_NAME -i $LB_PUBLIC_ADDRESS
Output:
Resource Group is aci-group
DNS name is vk-burst-demo
IP is 52.163.101.207
Start assigning domain name to IP 52.163.101.207...
"vk-burst-demo.southeastasia.cloudapp.azure.com"

## Edit the values.yaml file and replace all <hosts> with the FQDN from the previous step.
$ vim charts/fr-demo/values.yaml
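
## If <hosts> is a literal placeholder in the file, the replacement can also be scripted.
## Sketch only (assumes GNU sed and the FQDN printed by the previous step):
$ export APP_FQDN=$APP_NAME.$ACI_REGION.cloudapp.azure.com
$ sed -i "s/<hosts>/$APP_FQDN/g" charts/fr-demo/values.yaml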

## Start at the top of the aci-demos directory and deploy the Facial Recognition application that consists of a frontend, a backend, and a set of image recognizers.
$ helm install charts/fr-demo --name demo
## The application URL: https://vk-burst-demo.southeastasia.cloudapp.azure.com
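## A quick reachability check (use -k while the kube-lego certificate is still being issued):
$ curl -kI https://vk-burst-demo.southeastasia.cloudapp.azure.com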

Deploy the ACI provider in your cluster

## Grab the public master URI for your Kubernetes cluster and save the value.
$ kubectl cluster-info
Kubernetes master is running at https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443
Heapster is running at https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Metrics-server is running at https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

$ export MASTER_URI=https://myappclust-louis-rg-ab98f6-c057ca09.hcp.southeastasia.azmk8s.io:443
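
## The same value can also be taken from the current kubeconfig context instead of copying it
## by hand (assumes the AKS context is the active one):
$ export MASTER_URI=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
$ echo $MASTER_URI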

$ export RELEASE_NAME=virtual-kubelet
$ export VK_RELEASE=virtual-kubelet-latest
$ export NODE_NAME=virtual-kubelet
$ export CHART_URL=https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/$VK_RELEASE.tgz

$ helm install "$CHART_URL" --name "$RELEASE_NAME" \
  --set provider=azure \
  --set providers.azure.targetAKS=true \
  --set providers.azure.masterUri=$MASTER_URI
NAME:   virtual-kubelet
LAST DEPLOYED: Tue Apr  9 17:46:25 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                              READY  STATUS             RESTARTS  AGE
virtual-kubelet-virtual-kubelet-5c54c8bdbf-s5pbd  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME                             TYPE    DATA  AGE
virtual-kubelet-virtual-kubelet  Opaque  3     0s

==> v1/ServiceAccount
NAME                             SECRETS  AGE
virtual-kubelet-virtual-kubelet  1        0s

==> v1beta1/ClusterRoleBinding
NAME                             AGE
virtual-kubelet-virtual-kubelet  0s

==> v1beta1/Deployment
NAME                             READY  UP-TO-DATE  AVAILABLE  AGE
virtual-kubelet-virtual-kubelet  0/1    1           0          0s


NOTES:
The virtual kubelet is getting deployed on your cluster.

To verify that virtual kubelet has started, run:

  kubectl --namespace=default get pods -l "app=virtual-kubelet"

Note:
TLS key pair not provided for VK HTTP listener. A key pair was generated for you. This generated key pair is not suitable for production use.

## Verify virtual kubelet is starting.
$ kubectl --namespace=default get pods -l "app=virtual-kubelet"
$ kubectl get nodes

Create an AKS cluster with VNet

Here we choose to manually create and configure an AKS cluster with advanced networking, and to deploy a Virtual Kubelet node assigned to a virtual network subnet. This procedure is similar to enabling virtual nodes when creating an AKS cluster. The Virtual Kubelet component is installed in your AKS cluster and presents ACI as a virtual Kubernetes node. See [2][4] for reference.

## Virtual network is only supported in one of the locations 'EastUS2EUAP,CentralUSEUAP,WestUS,WestCentralUS,NorthEurope,WestEurope,EastUS,AustraliaEast'.

## Create an Azure virtual network and subnets
$ export VNET_RANGE=10.0.0.0/8
$ export CLUSTER_SUBNET_RANGE=10.240.0.0/16 
$ export ACI_SUBNET_RANGE=10.241.0.0/16 
$ export VNET_NAME=myAKSVNet 
$ export CLUSTER_SUBNET_NAME=myAKSSubnet 
$ export ACI_SUBNET_NAME=myACISubnet 
$ export AKS_CLUSTER_RG=myResourceGroup 
$ export KUBE_DNS_IP=10.0.0.10
$ az group create --name $AKS_CLUSTER_RG --location "$ACI_REGION"
$ az network vnet create \
    --resource-group $AKS_CLUSTER_RG \
    --name $VNET_NAME \
    --address-prefixes $VNET_RANGE \
    --subnet-name $CLUSTER_SUBNET_NAME \
    --subnet-prefix $CLUSTER_SUBNET_RANGE
$ az network vnet subnet create \
    --resource-group $AKS_CLUSTER_RG \
    --vnet-name $VNET_NAME \
    --name $ACI_SUBNET_NAME \
    --address-prefix $ACI_SUBNET_RANGE

## Create a service principal
$ az ad sp create-for-rbac -n "virtual-kubelet-sp" --skip-assignment
{
  "appId": "b72e4688-58ac-48ad-a2cb-d075691cb506",
  "displayName": "virtual-kubelet-sp",
  "name": "http://virtual-kubelet-sp",
  "password": "3718d3d9-6da7-4b4b-87dc-0c6a920c8411",
  "tenant": "1995746c-ac93-4961-8e30-a8ab19c92ad6"
}
## Or list by display name.
$ az ad sp list --display-name virtual-kubelet-sp

## OR reset your Service Principal password.
$ az ad sp reset-credentials --name "virtual-kubelet-sp"

$ export AZURE_TENANT_ID=1995746c-ac93-4961-8e30-a8ab19c92ad6
$ export AZURE_CLIENT_ID=b72e4688-58ac-48ad-a2cb-d075691cb506
$ export AZURE_CLIENT_SECRET=3718d3d9-6da7-4b4b-87dc-0c6a920c8411

## Integrate the Azure VNet resource
##   Create a role assignment for the SP.
##   NOTE: the role name must be "Network Contributor" (with a space), not "NetworkContributor"; as of 5/17 it is replaced by "Contributor" below.
$ export VNET_ID=`az network vnet show --resource-group $AKS_CLUSTER_RG --name $VNET_NAME --query id -o tsv`
$ az role assignment create --assignee $AZURE_CLIENT_ID --scope $VNET_ID --role "Contributor"
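
## To confirm the assignment took effect on the VNet scope, a quick check
## (az role assignment list accepts the same --assignee/--scope filters):
$ az role assignment list --assignee $AZURE_CLIENT_ID --scope $VNET_ID -o table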

## Create an AKS cluster and assign virtual network
  > 1. Virtual Kubelet will be deployed into the AKS cluster.
  > 2. Your application also runs in the same AKS cluster.

$ export VNET_SUBNET_ID=`az network vnet subnet show --resource-group $AKS_CLUSTER_RG --vnet-name $VNET_NAME --name $CLUSTER_SUBNET_NAME --query id -o tsv`
$ export AKS_CLUSTER_NAME=myAKSCluster
## Create cluster and specify --service-principal too.
$ az aks create \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --node-count 1 \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip $KUBE_DNS_IP \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $VNET_SUBNET_ID \
    --service-principal $AZURE_CLIENT_ID \
    --client-secret $AZURE_CLIENT_SECRET \
    --kubernetes-version 1.13.5
    //--enable-addons monitoring

    //--kubernetes-version 1.12.7 \
    //--enable-addons monitoring
    //--generate-ssh-keys

## Configure kubectl to connect to your Kubernetes cluster
$ az aks get-credentials --resource-group $AKS_CLUSTER_RG --name $AKS_CLUSTER_NAME

## Browse the Kubernetes Dashboard.
// For an RBAC-enabled cluster
$ kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
$ az aks browse --resource-group $AKS_CLUSTER_RG --name $AKS_CLUSTER_NAME

Install Prometheus & Grafana [10]

Enable virtual nodes addon

This step may replace the Deploy Virtual Kubelet step below (with slight differences; see [8]).

$ az aks enable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons virtual-node \
    --subnet-name $ACI_SUBNET_NAME

## We might also enable the `http_application_routing` addon to expose an ingress external IP address for the sample application. [9]
##  --addons http_application_routing
## az can enable multiple addons in one call by concatenating them with ',', as shown below.
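
## A sketch of a single call enabling both addons (assumes both are wanted;
## --subnet-name applies to the virtual-node addon):
$ az aks enable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons virtual-node,http_application_routing \
    --subnet-name $ACI_SUBNET_NAME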

## Remove virtual node addon.
$ az aks disable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons virtual-node

Deploy Virtual Kubelet

## Display AKS cluster information.
$ kubectl cluster-info
Kubernetes master is running at https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443
Heapster is running at https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Metrics-server is running at https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

$ export MASTER_URI=https://myaksclust-myresourcegroup-ab98f6-14304d1c.hcp.southeastasia.azmk8s.io:443

$ export RELEASE_NAME=virtual-kubelet
$ export NODE_NAME=virtual-kubelet
$ export VK_RELEASE=virtual-kubelet-latest
$ export CHART_URL=https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/$VK_RELEASE.tgz

## Init Helm Tiller in AKS
$ cat >helm-rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF

**See Install applications with Helm in AKS [3].**
$ kubectl apply -f helm-rbac.yaml

$ helm init --service-account tiller

## Install ACI connector
## All `--set` parameters are defined in `charts/virtual-kubelet/values.yaml` of $VK_RELEASE.tgz and consumed in `charts/virtual-kubelet/templates/deployment.yaml`.
$ helm install "$CHART_URL" --name "$RELEASE_NAME" \
    --set provider=azure \
    --set providers.azure.targetAKS=true \
    --set providers.azure.vnet.enabled=true \
    --set providers.azure.vnet.subnetName=$ACI_SUBNET_NAME \
    --set providers.azure.vnet.subnetCidr=$ACI_SUBNET_RANGE \
    --set providers.azure.vnet.clusterCidr=$CLUSTER_SUBNET_RANGE \
    --set providers.azure.vnet.kubeDnsIp=$KUBE_DNS_IP \
    --set providers.azure.masterUri=$MASTER_URI

NAME:   virtual-kubelet
LAST DEPLOYED: Fri Apr 12 11:44:38 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                              READY  STATUS   RESTARTS  AGE
virtual-kubelet-virtual-kubelet-7696577f5c-fs47d  1/1    Running  0         6s

==> v1/Secret
NAME                             TYPE    DATA  AGE
virtual-kubelet-virtual-kubelet  Opaque  3     6s

==> v1/ServiceAccount
NAME                             SECRETS  AGE
virtual-kubelet-virtual-kubelet  1        6s

==> v1beta1/ClusterRoleBinding
NAME                             AGE
virtual-kubelet-virtual-kubelet  6s

==> v1beta1/Deployment
NAME                             READY  UP-TO-DATE  AVAILABLE  AGE
virtual-kubelet-virtual-kubelet  1/1    1           1          6s


NOTES:
The virtual kubelet is getting deployed on your cluster.

To verify that virtual kubelet has started, run:

  kubectl --namespace=default get pods -l "app=virtual-kubelet"

Note:
TLS key pair not provided for VK HTTP listener. A key pair was generated for you. This generated key pair is not suitable for production use.

Validate the Virtual Kubelet ACI provider

$ kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-31673764-0   Ready    agent   10m   v1.13.5
virtual-node-aci-linux     Ready    agent   28s   v1.13.1-vk-v0.9.0-1-g7b92d1ee-dev

## Deploy a pod in ACI
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aci-helloworld
  template:
    metadata:
      labels:
        app: aci-helloworld
    spec:
      containers:
      - name: aci-helloworld
        image: microsoft/aci-helloworld
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
EOF

## aci-helloworld runs in ACI_SUBNET_RANGE (10.241.0.0/16), while the aci-connector/virtual-kubelet pod runs in CLUSTER_SUBNET_RANGE (10.240.0.0/16).
$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE                       NOMINATED NODE   READINESS GATES
default       aci-helloworld-8875447cd-kthr9          1/1     Running   0          6m43s   10.241.0.4    virtual-node-aci-linux     <none>           <none>
kube-system   aci-connector-linux-597b685b6f-dmzk4    1/1     Running   2          9m20s   10.240.0.16   aks-nodepool1-31673764-0   <none>           <none>
kube-system   azure-cni-networkmonitor-f5j2s          1/1     Running   0          19m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   azure-ip-masq-agent-2dnkv               1/1     Running   0          19m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-4xhf2                1/1     Running   0          8m42s   10.240.0.12   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-5clsn                1/1     Running   0          18m     10.240.0.17   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-dgr66                1/1     Running   0          22m     10.240.0.32   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-autoscaler-5966954696-x5gwn     1/1     Running   0          22m     10.240.0.13   aks-nodepool1-31673764-0   <none>           <none>
kube-system   kube-proxy-6dcm4                        1/1     Running   0          8m48s   10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   kube-svc-redirect-j5ctv                 2/2     Running   0          19m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   kubernetes-dashboard-7bcd48b598-q2qbj   1/1     Running   0          22m     10.240.0.23   aks-nodepool1-31673764-0   <none>           <none>
kube-system   metrics-server-86bb5bc4bb-stprg         1/1     Running   0          22m     10.240.0.33   aks-nodepool1-31673764-0   <none>           <none>
kube-system   omsagent-rs-58669cc685-r8gpl            1/1     Running   0          22m     10.240.0.21   aks-nodepool1-31673764-0   <none>           <none>
kube-system   omsagent-rxn45                          1/1     Running   0          19m     10.240.0.10   aks-nodepool1-31673764-0   <none>           <none>
kube-system   tunnelfront-7fb9f8cc54-v28s4            1/1     Running   0          22m     10.240.0.14   aks-nodepool1-31673764-0   <none>           <none>

## Validate that the container is running in an ACI
$ az container list -o table
Name                                    ResourceGroup                                  Status     Image                     IP:ports          Network    CPU/Memory       OsType    Location
--------------------------------------  ---------------------------------------------  ---------  ------------------------  ----------------  ---------  ---------------  --------  -------------
default-aci-helloworld-8875447cd-kthr9  MC_myResourceGroup_myAKSCluster_southeastasia  Succeeded  microsoft/aci-helloworld  10.241.0.4:80,80  Private    1.0 core/1.5 gb  Linux     southeastasia

## Scale deployment
$ kubectl get deploy
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
aci-helloworld   1/1     1            1           8m15s

$ kubectl scale --replicas=5 deployment/aci-helloworld
$ kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE                     NOMINATED NODE   READINESS GATES
aci-helloworld-8875447cd-5wspx   1/1     Running   0          49s     10.241.0.7   virtual-node-aci-linux   <none>           <none>
aci-helloworld-8875447cd-85svc   1/1     Running   0          49s     10.241.0.5   virtual-node-aci-linux   <none>           <none>
aci-helloworld-8875447cd-kthr9   1/1     Running   0          9m34s   10.241.0.4   virtual-node-aci-linux   <none>           <none>
aci-helloworld-8875447cd-ql4mp   1/1     Running   0          49s     10.241.0.8   virtual-node-aci-linux   <none>           <none>
aci-helloworld-8875447cd-vw5p7   1/1     Running   0          49s     10.241.0.6   virtual-node-aci-linux   <none>           <none>

Workaround for the ACI Connector pod

## View logs of VK
$ kubectl logs -n kube-system aci-connector-linux-597b685b6f-dmzk4

## Edit your aci-connector deployment
$ kubectl cluster-info
Kubernetes master is running at https://myaksclust-myresourcegroup-ab98f6-522a9103.hcp.northeurope.azmk8s.io:443
...
$ kubectl get deploy -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
aci-connector-linux    1/1     1            1           15m
coredns                3/3     3            3           28m
coredns-autoscaler     1/1     1            1           28m
kubernetes-dashboard   1/1     1            1           28m
metrics-server         1/1     1            1           28m
omsagent-rs            1/1     1            1           28m
tunnelfront            1/1     1            1           28m
$ kubectl edit deploy/aci-connector-linux -n kube-system
## edit and save to apply change
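
## A non-interactive alternative to `kubectl edit` is `kubectl set env`, e.g. to point the
## connector at the public master URI captured above. The MASTER_URI variable name here is
## an assumption, not the documented workaround -- verify it against the container spec first.
$ kubectl -n kube-system get deploy aci-connector-linux -o yaml
$ kubectl -n kube-system set env deploy/aci-connector-linux MASTER_URI=https://myaksclust-myresourcegroup-ab98f6-522a9103.hcp.northeurope.azmk8s.io:443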

Remove virtual node addon

## Remove virtual node addon.
$ az aks disable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons virtual-node

Test the virtual node pod

$ kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS        RESTARTS   AGE     IP            NODE                       NOMINATED NODE   READINESS GATES
default       aci-helloworld-8875447cd-kthr9          1/1     Running       0          27m     10.241.0.4    virtual-node-aci-linux     <none>           <none>
kube-system   aci-connector-linux-597b685b6f-dmzk4    1/1     Running       2          29m     10.240.0.16   aks-nodepool1-31673764-0   <none>           <none>
kube-system   azure-cni-networkmonitor-f5j2s          1/1     Running       0          39m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   azure-ip-masq-agent-2dnkv               1/1     Running       0          39m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-4xhf2                1/1     Running       0          29m     10.240.0.12   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-5clsn                1/1     Running       0          39m     10.240.0.17   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-656cbbf7b9-dgr66                1/1     Running       0          42m     10.240.0.32   aks-nodepool1-31673764-0   <none>           <none>
kube-system   coredns-autoscaler-5966954696-x5gwn     1/1     Running       0          42m     10.240.0.13   aks-nodepool1-31673764-0   <none>           <none>
kube-system   kube-proxy-6dcm4                        1/1     Running       0          29m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   kube-svc-redirect-j5ctv                 2/2     Running       0          39m     10.240.0.4    aks-nodepool1-31673764-0   <none>           <none>
kube-system   kubernetes-dashboard-7bcd48b598-q2qbj   1/1     Running       0          42m     10.240.0.23   aks-nodepool1-31673764-0   <none>           <none>
kube-system   metrics-server-86bb5bc4bb-stprg         1/1     Running       0          42m     10.240.0.33   aks-nodepool1-31673764-0   <none>           <none>
kube-system   omsagent-rs-58669cc685-r8gpl            1/1     Running       0          42m     10.240.0.21   aks-nodepool1-31673764-0   <none>           <none>
kube-system   omsagent-rxn45                          1/1     Running       0          39m     10.240.0.10   aks-nodepool1-31673764-0   <none>           <none>
kube-system   tunnelfront-7fb9f8cc54-v28s4            1/1     Running       0          42m     10.240.0.14   aks-nodepool1-31673764-0   <none>           <none>

$ kubectl run -it --rm virtual-node-test --image=debian --generator=run-pod/v1
$ apt-get update && apt-get install -y curl
$ curl -L http://10.241.0.4

Create Horizontal Pod Autoscaler

$ kubectl autoscale deployment aci-helloworld --cpu-percent=50 --min=1 --max=10
$ kubectl get hpa
NAME             REFERENCE                   TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
aci-helloworld   Deployment/aci-helloworld   <unknown>/50%   1         10        1          24s

## Generate load
$ kubectl run -it --rm virtual-node-test --image=debian --generator=run-pod/v1
$ apt-get update && apt-get install -y wget
$ while true; do wget -q -O- http://10.241.0.4; done

### Q: The CPU usage of the aci-helloworld pod is always reported as 0. How can the real CPU usage of the ACI container be retrieved? Refer to: metrics-server is broken, https://github.com/Azure/aks-engine/issues/73
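
## To see what the metrics pipeline actually reports for the pod (and therefore what the HPA sees):
## Resource usage as reported by metrics-server (stays at 0 for the ACI-backed pod if the issue above applies).
$ kubectl top pods
## What the HPA controller last observed, plus any scaling events.
$ kubectl describe hpa aci-helloworld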

## VK's stats endpoint, which heapster and metrics-server scrape stats from. See [5].
$ kubectl run -it --rm virtual-node-test --image=debian --generator=run-pod/v1
$ apt-get update && apt-get install -y curl jq
$ curl http://${ip_address_of_aci-connector}:10255/stats/summary | jq .
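
## The connector pod IP to substitute into the curl above can be looked up with, for example:
$ kubectl -n kube-system get pods -o wide | grep aci-connector
## or, for the helm-deployed release running in the default namespace:
$ kubectl get pods -l app=virtual-kubelet -o jsonpath='{.items[0].status.podIP}'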

More details on Kubernetes in AKS

  1. Some useful endpoints after k8s cluster setup [6] (TBV)

    You may access these endpoints via kubectl proxy, a NodePort, the API server, or an ingress.

    $ kubectl cluster-info
    Kubernetes master is running at https://myaksclust-myresourcegroup-ab98f6-c92e8469.hcp.northeurope.azmk8s.io:443
    Heapster is running at https://myaksclust-myresourcegroup-ab98f6-c92e8469.hcp.northeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
    CoreDNS is running at https://myaksclust-myresourcegroup-ab98f6-c92e8469.hcp.northeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    kubernetes-dashboard is running at https://myaksclust-myresourcegroup-ab98f6-c92e8469.hcp.northeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
    Metrics-server is running at https://myaksclust-myresourcegroup-ab98f6-c92e8469.hcp.northeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    
    ## Without kubectl proxy
    ### Point to the API server, referencing the cluster name
    $ APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$AKS_CLUSTER_NAME\")].cluster.server}")
    ### Gets the token value of 'default' service account.
    $ TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
    
    # Or Create a service account with more permissions.
    $ cat <<EOF|kubectl apply -f -
     apiVersion: v1
     kind: ServiceAccount
     metadata:
       name: api-explorer
       namespace: default
     secrets:
     - name: api-explorer-secret
     ---
     apiVersion: v1
     kind: Secret
     metadata:
       name: api-explorer-secret
       annotations:
         kubernetes.io/service-account.name: api-explorer
     type: kubernetes.io/service-account-token
     ---
     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRole
     metadata:
       name: api-explorer-role
     rules:
     - apiGroups: [""]
       resources: ["pods", "nodes", "replicationcontrollers", "events", "limitranges", "services", "apiservices", "horizontalpodautoscalers", "services/proxy"]
       verbs: ["get", "delete", "list", "patch", "update"]
     ---
     apiVersion: rbac.authorization.k8s.io/v1beta1
     kind: ClusterRoleBinding
     metadata:
       name: api-explorer-role-binding
     roleRef:
       kind: ClusterRole
       name: api-explorer-role
       apiGroup: rbac.authorization.k8s.io
     subjects:
     - kind: ServiceAccount
       name: api-explorer
       namespace: default
     EOF
    
    $ kubectl describe secret api-explorer-secret
    Name:         api-explorer-secret
    ...
    Data
    ====
    ca.crt:     1720 bytes
    namespace:  7 bytes
    token: (BEARER TOKEN BASE64 DECODED)
    $ TOKEN=(BEARER TOKEN BASE64 DECODED)
    
    ### Explore the API with TOKEN
    $ curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
    
    ### Delete service account
    $ cat <<EOF|kubectl delete -f -
    ...
    EOF
    
    ## Using kubectl proxy
    $ kubectl proxy --port=8001 &
    Starting to serve on 127.0.0.1:8001
    # explore the API at http://localhost:8001/api/
    
  2. List all resource types

    $ kubectl api-resources
    

Deploy the Prometheus Metric Adapter [7]

# Install Prometheus Operator
$ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
# Install Prometheus instance
$ kubectl apply -f online-store/prometheus-config/prometheus
# Expose a Service for Prometheus instance
$ kubectl expose pod prometheus-prometheus-0 --port 9090 --target-port 9090
# Deploy the Prometheus Metric Adapter
$ helm install stable/prometheus-adapter \
    --name prometheus-adaptor \
    -f ./online-store/prometheus-config/prometheus-adapter/values.yaml
# Show metrics
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/requests_per_second" | jq .

# Export Virtual Kubelet node name
$ kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-31673764-0   Ready    agent   90m   v1.12.7
virtual-kubelet            Ready    agent   83m   v1.13.1-vk-v0.7.4-44-g4f3bd20e-dev
$ export VK_NODE_NAME=virtual-kubelet
# Export the ingress external IP address and class annotation
$ az aks enable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons http_application_routing
$ kubectl get svc --all-namespaces
NAMESPACE     NAME                                                  TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
...
kube-system   addon-http-application-routing-nginx-ingress          LoadBalancer   10.0.19.21     40.118.126.178   80:31478/TCP,443:30061/TCP   67m
...
$ export INGRESS_EXTERNAL_IP=40.118.126.178
# Export the Ingress controller class annotation
$ export INGRESS_CLASS_ANNOTATION=$(kubectl -n kube-system get po addon-http-application-routing-nginx-ingress-controller-8fvnl7g -o yaml | grep ingress-class | sed -e 's/.*=//')
# Run demo without Application Insights
$ helm install --name vn-affinity ./charts/vn-affinity-admission-controller
$ kubectl label namespace default vn-affinity-injection=enabled
    # p.s. remove label with 'kubectl label namespace default vn-affinity-injection-'
$ helm install ./charts/online-store \
    --name online-store \
    --set counter.specialNodeName=$VK_NODE_NAME,app.ingress.host=store.$INGRESS_EXTERNAL_IP.nip.io,appInsight.enabled=false,app.ingress.annotations."kubernetes\.io/ingress\.class"=$INGRESS_CLASS_ANNOTATION
# STILL FAILS to install online-store into ACI.

A sample application to demonstrate Autoscale with AKS Virtual Nodes [9]

# An ingress solution must exist to accept requests for the sample application.
# The easiest way to set this up is by installing the HTTP application routing add-on for AKS.
$ az aks enable-addons \
    --resource-group $AKS_CLUSTER_RG \
    --name $AKS_CLUSTER_NAME \
    --addons http_application_routing

SSH to AKS cluster nodes [11]

NOTE: the default SSH username may be netuser instead of azureuser, depending on how the cluster was created.

Add your public SSH key

# Get the resource group name for your AKS cluster resources
$ export CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group "$AKS_CLUSTER_RG" --name "$AKS_CLUSTER_NAME" --query nodeResourceGroup -o tsv)

# List the VMs in the AKS cluster resource group
$ az vm list --resource-group $CLUSTER_RESOURCE_GROUP -o table
Name                    ResourceGroup                                              Location       Zones
----------------------  ---------------------------------------------------------  -------------  -------
aks-default-94919597-0  MC_nbiot-cicd_aks-device-layer-sercomm-cicd_southeastasia  southeastasia
aks-default-94919597-1  MC_nbiot-cicd_aks-device-layer-sercomm-cicd_southeastasia  southeastasia
aks-default-94919597-3  MC_nbiot-cicd_aks-device-layer-sercomm-cicd_southeastasia  southeastasia
aks-default-94919597-4  MC_nbiot-cicd_aks-device-layer-sercomm-cicd_southeastasia  southeastasia

# Add your SSH key to the node.
$ az vm user update \
  --resource-group $CLUSTER_RESOURCE_GROUP \
  --name aks-default-94919597-0 \
  --username azureuser \
  --ssh-key-value ~/.ssh/id_rsa.pub

Create the SSH connection

$ kubectl get nodes -o wide
NAME                     STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-default-94919597-0   Ready    agent   105d   v1.12.6   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1041-azure   docker://3.0.4
aks-default-94919597-1   Ready    agent   105d   v1.12.6   10.240.0.6    <none>        Ubuntu 16.04.5 LTS   4.15.0-1041-azure   docker://3.0.4
aks-default-94919597-3   Ready    agent   105d   v1.12.6   10.240.0.7    <none>        Ubuntu 16.04.5 LTS   4.15.0-1041-azure   docker://3.0.4
aks-default-94919597-4   Ready    agent   105d   v1.12.6   10.240.0.8    <none>        Ubuntu 16.04.5 LTS   4.15.0-1041-azure   docker://3.0.4

# A debian pod is able to reach the kube-apiserver.
$ kubectl run -it --rm aks-ssh --image=debian
# A busybox pod is not able to telnet to the kube-apiserver. Why?
### $ kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh

root@aks-ssh-589f4659c5-mn6kr:/# apt-get update && apt-get install openssh-client -y

$ kubectl get pods|grep aks-ssh
aks-ssh-589f4659c5-mn6kr                    1/1     Running   0          2m16s

$ kubectl cp ~/.ssh/id_rsa aks-ssh-589f4659c5-mn6kr:/id_rsa

root@aks-ssh-589f4659c5-mn6kr:/# chmod 0600 id_rsa

root@aks-ssh-589f4659c5-mn6kr:/# ssh -i id_rsa azureuser@10.240.0.5

References:

  1. Kubernetes Virtual Kubelet with ACI, https://github.com/virtual-kubelet/virtual-kubelet/blob/master/providers/azure/README.md
  2. Preview - Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes in the Azure portal, https://docs.microsoft.com/en-us/azure/aks/virtual-nodes-portal
  3. Horizontal Pod Autoscaler Walkthrough, https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  4. Scaling options for applications in Azure Kubernetes Service (AKS), https://docs.microsoft.com/en-us/azure/aks/concepts-scale
  5. Add support for kubelet stats summary, https://github.com/virtual-kubelet/virtual-kubelet/pull/306
  6. Accessing Dashboard 1.7.X and above, https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
  7. Virtual node Autoscale Demo, https://github.com/Azure-Samples/virtual-node-autoscale
  8. Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes using the Azure CLI, https://docs.microsoft.com/zh-tw/azure/aks/virtual-nodes-cli
  9. A sample application to demonstrate Autoscale with AKS Virtual Nodes, https://github.com/Azure-Samples/virtual-node-autoscale
  10. Using Prometheus in Azure Kubernetes Service (AKS), https://itnext.io/using-prometheus-in-azure-kubernetes-service-aks-ae22cada8dd9
  11. Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting, https://docs.microsoft.com/en-us/azure/aks/ssh#add-ssh-keys-to-regular-aks-clusters
