Deploy Akash Provider with kubeadm, containerd, gvisor

This write-up walks you through the configuration and setup steps required to run the Akash Provider on your own Linux distro (I used x86_64 Ubuntu Focal).
The steps to register and activate the Akash Provider are also included.

We are going to be using containerd, so there is no need to install Docker!

I also haven't used kubespray, which the official doc suggests, because I want more control over every gear in the system and I don't want to install Docker.

Preparation

Tune netfilter

kube-proxy needs net.bridge.bridge-nf-call-iptables enabled
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
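
A quick sanity check that the module is loaded and the sysctls took effect (both values should read 1):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables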

Install CNI plugins

Container Network Interface (CNI) plugins - required by most pod networks.
cd
mkdir -p /etc/cni/net.d /opt/cni/bin
CNI_ARCH=amd64
CNI_VERSION=0.9.1
URL=https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-${CNI_ARCH}-v${CNI_VERSION}.tgz
curl -sSL $URL | tar -xz -C /opt/cni/bin

Install crictl

crictl is a CLI for the Kubelet Container Runtime Interface (CRI) - required by kubeadm and kubelet.
INSTALL_DIR=/usr/local/bin
mkdir -p $INSTALL_DIR
CRICTL_VERSION="v1.21.0"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -xz -C $INSTALL_DIR

Update /etc/crictl.yaml with the following lines:

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
#debug: true
pull-image-on-create: false
disable-pull-on-run: false
/etc/crictl.yaml

Install gVisor

gVisor is an application kernel for containers that provides efficient defense-in-depth anywhere.
A good and quick comparison of container runtimes can be found here.
curl -fsSL https://gvisor.dev/archive.key | apt-key add -
add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main"
apt-get update
apt-get install -y runsc
Update on 23 July 2021: I've noticed the gVisor repo GPG key expired on the 9th of July 2021. For now, you can fix it by running wget -qO - https://raw.githubusercontent.com/google/gvisor/ca255741c92e04899ac2f49226d1abec6589cb8c/website/archive.key | apt-key add -.

Configure containerd to use gVisor

Since Kubernetes is going to use containerd (you will see this later, when we bootstrap the cluster using kubeadm), we need to configure containerd to use the gVisor runtime.

Update /etc/containerd/config.toml with the following lines:

# version MUST be present, otherwise containerd won't pick up runsc!
version = 2

#disabled_plugins = ["cri"]

[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
/etc/containerd/config.toml

And restart containerd service:

systemctl restart containerd
gVisor (runsc) doesn't work with systemd-cgroup or cgroup v2 yet; there are two open issues if you wish to follow them:
systemd-cgroup support #193
Support cgroup v2 in runsc #3481
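
Before testing, you can double-check that the host is still on cgroup v1 and that containerd picked up the runsc runtime after the restart. This is a quick check; crictl info dumps the CRI configuration, so grepping it for runsc should show the runtime entry added above:

# prints "tmpfs" on cgroup v1 and "cgroup2fs" on cgroup v2 (the latter won't work with runsc yet)
stat -fc %T /sys/fs/cgroup/
# the runsc runtime should show up in the CRI configuration
crictl info | grep -i runsc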

Test gVisor is working

cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
	"cniVersion": "0.2.0",
	"name": "lo",
	"type": "loopback"
}
EOF

cat <<EOF | tee sandbox.json
{
    "metadata": {
        "name": "nginx-sandbox",
        "namespace": "default",
        "attempt": 1,
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "linux": {
    },
    "log_directory": "/tmp"
}
EOF

SANDBOX_ID=$(crictl runp --runtime runsc sandbox.json)

cat <<EOF | tee container.json
{
  "metadata": {
      "name": "nginx"
    },
  "image":{
      "image": "nginx"
    },
  "log_path":"nginx.0.log",
  "linux": {
  }
}
EOF

CONTAINER_ID=$(crictl create ${SANDBOX_ID} container.json sandbox.json)
crictl start ${CONTAINER_ID}

crictl inspectp ${SANDBOX_ID}
crictl inspect ${CONTAINER_ID}

# You should see "Starting gVisor" -> this means gVisor is working as expected!

# crictl exec ${CONTAINER_ID} dmesg | grep -i gvisor
[    0.000000] Starting gVisor...

crictl ps -a
crictl pods
crictl images

## Tearing down
crictl stop ${CONTAINER_ID} 
crictl rm ${CONTAINER_ID} 
crictl stopp ${SANDBOX_ID} 
crictl rmp ${SANDBOX_ID} 

rm /etc/cni/net.d/99-loopback.conf

Install Kubernetes

Install latest stable kubeadm, kubelet, kubectl and add a kubelet systemd service

INSTALL_DIR=/usr/local/bin
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
cd $INSTALL_DIR
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

RELEASE_VERSION="v0.9.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${INSTALL_DIR}:g" | tee /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${INSTALL_DIR}:g" | tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl enable kubelet
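
A quick check that the binaries landed in ${INSTALL_DIR} and are executable:

kubeadm version -o short
kubelet --version
kubectl version --client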

Install Kubernetes using kubeadm

Feel free to adjust podSubnet & serviceSubnet and other control plane configuration to your needs.
For more flags refer to https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
cat > kubeadm-config.yaml << EOF
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kubernetesVersion: v1.21.2
networking:
  podSubnet: "10.233.64.0/18" # --pod-network-cidr, taken from kubespray
  serviceSubnet: "10.233.0.0/18" # --service-cidr, taken from kubespray
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock # --cri-socket=unix:///run/containerd/containerd.sock
localAPIEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
EOF

kubeadm init --config kubeadm-config.yaml

If you see the "Your Kubernetes control-plane has initialized successfully!" message, then everything went well and you now have your Kubernetes control-plane node at your service!

kubeadm will also print a kubeadm join command with a --token. Keep it safe, as this command is required to join more nodes (worker nodes, data nodes) should you want to, depending on the type of architecture you are after.
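
If you lose that output, you can print a fresh join command at any time (the bootstrap token it contains is valid for 24 hours by default):

kubeadm token create --print-join-command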

Check your nodes

You will always want to set the KUBECONFIG variable whenever you want to talk to your Kubernetes cluster. Keep your /etc/kubernetes/admin.conf safe, as it's your Kubernetes admin config which lets you do everything with your K8s cluster.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide

Install Calico network

cd
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml

wget https://docs.projectcalico.org/manifests/custom-resources.yaml

## adjust the CIDR to your podSubnet, e.g. 10.233.64.0/18 in our case, as used when creating the K8s cluster with kubeadm (or patch it with sed as shown after this block)
vim custom-resources.yaml

kubectl create -f custom-resources.yaml 

watch kubectl get pods -n calico-system
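
Instead of editing custom-resources.yaml by hand, you can patch the pod CIDR with sed before applying it. This is a sketch assuming the manifest still ships with Calico's default 192.168.0.0/16 pool:

sed -i 's|192.168.0.0/16|10.233.64.0/18|' custom-resources.yaml
grep cidr custom-resources.yaml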

By default, your K8s cluster will not schedule Pods on the control-plane node for security reasons. Either remove the taints on the master so that you can schedule pods on it using the kubectl taint nodes command, OR use kubeadm join to add worker nodes which will run calico (but make sure to perform the preparation steps on them first: install the CNI plugins, install crictl, and configure containerd to use gVisor).

Remove the taints on the master:

# kubectl describe node <node> |grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

# kubectl taint nodes --all node-role.kubernetes.io/master-

Check your nodes and pods

# kubectl get nodes -o wide --show-labels
NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME    LABELS
lamborghini   Ready    control-plane,master   10m   v1.21.2   149.202.82.160   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.4.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=lamborghini,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
# kubectl get pods --all-namespaces 
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
calico-system     calico-kube-controllers-7f58dbcbbd-4fwlj   1/1     Running   0          2m
calico-system     calico-node-qvs6t                          1/1     Running   0          2m
calico-system     calico-typha-859f9f8b5b-x9djb              1/1     Running   0          2m
kube-system       coredns-558bd4d5db-c7fsf                   1/1     Running   0          3m12s
kube-system       coredns-558bd4d5db-xfmrt                   1/1     Running   0          3m12s
kube-system       etcd-lamborghini                           1/1     Running   0          3m27s
kube-system       kube-apiserver-lamborghini                 1/1     Running   0          3m27s
kube-system       kube-controller-manager-lamborghini        1/1     Running   0          3m19s
kube-system       kube-proxy-nn5ws                           1/1     Running   0          3m12s
kube-system       kube-scheduler-lamborghini                 1/1     Running   0          3m19s
tigera-operator   tigera-operator-86c4fc874f-zgtnc           1/1     Running   0          2m31s

(Optional) Encrypt etcd

etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
Kubernetes uses etcd to store all its data – its configuration data, its state, and its metadata. Kubernetes is a distributed system, so it needs a distributed data store like etcd. etcd lets any of the nodes in the Kubernetes cluster read and write data.

⚠️ Storing the raw encryption key in the EncryptionConfig only moderately improves your security posture, compared to no encryption. Please use a kms provider for additional security.

# mkdir /etc/kubernetes/encrypt

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/kubernetes/encrypt/config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Update your /etc/kubernetes/manifests/kube-apiserver.yaml in the following way so kube-apiserver knows where to read the secret from:

# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# diff -Nur kube-apiserver.yaml.orig /etc/kubernetes/manifests/kube-apiserver.yaml
--- kube-apiserver.yaml.orig	2021-06-29 09:17:51.093430414 +0000
+++ /etc/kubernetes/manifests/kube-apiserver.yaml	2021-06-29 09:16:54.709755099 +0000
@@ -41,6 +41,7 @@
     - --service-cluster-ip-range=10.233.0.0/18
     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
+    - --encryption-provider-config=/etc/kubernetes/encrypt/config.yaml
     image: k8s.gcr.io/kube-apiserver:v1.21.2
     imagePullPolicy: IfNotPresent
     livenessProbe:
@@ -95,6 +96,9 @@
     - mountPath: /usr/share/ca-certificates
       name: usr-share-ca-certificates
       readOnly: true
+    - mountPath: /etc/kubernetes/encrypt
+      name: k8s-encrypt
+      readOnly: true
   hostNetwork: true
   priorityClassName: system-node-critical
   volumes:
@@ -122,4 +126,8 @@
       path: /usr/share/ca-certificates
       type: DirectoryOrCreate
     name: usr-share-ca-certificates
+  - hostPath:
+      path: /etc/kubernetes/encrypt
+      type: DirectoryOrCreate
+    name: k8s-encrypt
 status: {}
/etc/kubernetes/manifests/kube-apiserver.yaml

kube-apiserver will automatically restart when you save the /etc/kubernetes/manifests/kube-apiserver.yaml file. (This can take a minute or two, be patient.)

# crictl ps | grep apiserver
10e6f4b409a4b       106ff58d43082       36 seconds ago       Running             kube-apiserver            0                   754932bb659c5

Don't forget to do the same across all your Kubernetes nodes!

Encrypt all secrets using the encryption key you have just added:

kubectl get secrets --all-namespaces -o json | kubectl replace -f -
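
To verify that secrets really are encrypted at rest, you can create a test secret and read it straight from etcd; it should be stored with a k8s:enc:aescbc:v1: prefix instead of plain text. This is a sketch assuming etcdctl is available on the host and the default kubeadm certificate paths are used:

kubectl create secret generic secret1 -n default --from-literal=mykey=mydata

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/secret1 | hexdump -C | head

kubectl delete secret secret1 -n default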

(Optional) IPv6 support

If you wish to enable IPv6 support in your Kubernetes cluster, then please refer to this page https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/

Check your K8s DNS is working as expected

# cat dnstest.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

# kubectl apply -f dnstest.yaml

# kubectl exec -i -t dnsutils -- nslookup google.com
Server:		10.233.0.10
Address:	10.233.0.10#53

Non-authoritative answer:
Name:	google.com
Address: 142.250.75.238
Name:	google.com
Address: 2a00:1450:4007:816::200e

# kubectl exec -i -t dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.233.0.10
options ndots:5

# kubectl delete -f dnstest.yaml

Configure Kubernetes to use gVisor

Set up the Kubernetes RuntimeClass

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF
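
Confirm the RuntimeClass has been registered:

# should list gvisor with the runsc handler
kubectl get runtimeclass gvisor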

Test Kubernetes is using gVisor

Create a Pod with the gVisor RuntimeClass:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
# kubectl get pod nginx-gvisor -o wide

# kubectl exec nginx-gvisor -- dmesg
[    0.000000] Starting gVisor...
# kubectl delete pod nginx-gvisor 

If you see "Starting gVisor..." that means Kubernetes is able to run the containers using the gVisor (runsc)! We are good.

Now that we've got our Kubernetes configured, up & running, it's time to get the Akash Provider running.

Creating the Akash Provider on the Akash Blockchain

Create akash user

We are going to run the Akash provider under the akash user.

sudo useradd akash -m -U -s /usr/sbin/nologin
sudo su -s /bin/bash - akash

Configure your Kubernetes for Akash provider

cd
mkdir akash-provider
cd akash-provider

wget https://raw.githubusercontent.com/ovrclk/akash/master/pkg/apis/akash.network/v1/crd.yaml
kubectl apply -f ./crd.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/master/_docs/kustomize/networking/network-policy-default-ns-deny.yaml
kubectl apply -f ./network-policy-default-ns-deny.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/master/_run/ingress-nginx.yaml
kubectl apply -f ./ingress-nginx.yaml

# NOTE: my Kubernetes node is called "lamborghini" and it's going to be the ingress node too. In a perfect environment that would not be the control-plane node, but rather the worker nodes.
kubectl label nodes lamborghini akash.network/role=ingress

# kubectl get nodes -o wide --show-labels 
NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME    LABELS
lamborghini   Ready    control-plane,master   52m   v1.21.2   149.202.82.160   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.4.6   akash.network/role=ingress,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=lamborghini,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=

Get a wildcard DNS record

In my case I'm going to use <anything>.akash.nixaid.com, which resolves to the IP of my Kubernetes node(s):

A *.akash.nixaid.com resolves to 149.202.82.160

And provider.akash.nixaid.com is going to resolve to the IP of the Akash Provider service itself that I'm going to be running. (The Akash Provider service listens on port 8443/tcp.)
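
Before moving on, it's worth checking that both records resolve; for example (my hostnames, substitute your own):

dig +short anything.akash.nixaid.com
dig +short provider.akash.nixaid.com
# both should return 149.202.82.160 in my case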

Install Akash client

# set AKASH_NET first, it's needed to look up the version (it's exported again in the next section)
export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
AKASH_VERSION="$(curl -s "$AKASH_NET/version.txt")"
curl https://raw.githubusercontent.com/ovrclk/akash/master/godownloader.sh | sh -s -- "v$AKASH_VERSION"

Configure Akash client

I'm going to use the /home/akash/akash-lambo directory to store the provider's key.

mkdir -p ~/akash-lambo
cd ~/akash-lambo

export KUBECONFIG=/home/akash/akash-lambo/k8s-admin.conf
export PROVIDER_DOMAIN=provider.akash.nixaid.com
export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | shuf -n 1)"
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
export AKASH_KEYRING_BACKEND=file
export AKASH_PROVIDER_KEY=default
export AKASH_HOME="/home/akash/akash-lambo/home"
export AKASH_BOOT_KEYS=~/akash-lambo
export AKASH_FROM=$AKASH_PROVIDER_KEY

# Check what we've got set:
akash@lamborghini:~/akash-lambo$ set |grep ^AKASH
AKASH_BOOT_KEYS=/home/akash/akash-lambo
AKASH_CHAIN_ID=akashnet-2
AKASH_FROM=default
AKASH_HOME=/home/akash/akash-lambo/home
AKASH_KEYRING_BACKEND=file
AKASH_NET=https://raw.githubusercontent.com/ovrclk/net/master/mainnet
AKASH_NODE=http://rpc.akash.forbole.com:80
AKASH_PROVIDER_KEY=default

Now create the default key:

akash@lamborghini:~/akash-lambo$ akash keys add $AKASH_PROVIDER_KEY --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND

Enter keyring passphrase:
Re-enter keyring passphrase:

- name: default
  type: local
  address: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
...

Make sure to keep your mnemonic seed somewhere safe as it's the only way to recover your account and funds on it!

If you want to restore your key from your mnemonic seed, add the --recover flag to the akash keys add ... command.
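
For example, restoring from the mnemonic would look like this (a sketch; it prompts for the mnemonic and then the keyring passphrase):

akash keys add $AKASH_PROVIDER_KEY --recover --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND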

Configure Akash provider

akash@lamborghini:~/akash-lambo$ cat provider.yaml 
host: https://provider.akash.nixaid.com:8443
attributes:
  - key: region
    value: europe  ## change this to your region!
  - key: host
    value: nixaid  ## change this to your host!

Fund your Akash provider's wallet

You will need about 50 AKT (Akash Token) to get you started.

Your wallet must have sufficient funding, as placing a bid on an order on the blockchain requires a 50 AKT deposit. This deposit is fully refunded after the bid is won/lost.

Purchase AKT at one of the exchanges mentioned here https://akash.network/token/

To query the balance of your wallet:

# Put your address here, the one you got when you created the key with the "akash keys add" command.
export AKASH_ACCOUNT_ADDRESS=akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0

akash@lamborghini:~/akash-lambo$ akash \
  --node "$AKASH_NODE" \
  query bank balances "$AKASH_ACCOUNT_ADDRESS"
Denomination: 1 akt = 1000000 uakt (akt*10^6)

Register your provider on the Akash Network

akash@lamborghini:~/akash-lambo$ akash tx provider create provider.yaml \
  --from $AKASH_PROVIDER_KEY \
  --home=$AKASH_HOME \
  --keyring-backend=$AKASH_KEYRING_BACKEND \
  --node=$AKASH_NODE \
  --chain-id=$AKASH_CHAIN_ID \
  --gas-prices="0.025uakt" \
  --gas="80000"
If the akash tx command fails with a "low gas" error, try the flags --gas-prices="0.025uakt" --gas="auto" --gas-adjustment=1.15 and remove --gas="80000".
If you want to change the parameters of your provider.yaml, use the akash tx provider update command with the same arguments, as shown below.
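
For example, after editing provider.yaml the update would look like this (a sketch, using the same flags as the create):

akash tx provider update provider.yaml \
  --from $AKASH_PROVIDER_KEY \
  --home=$AKASH_HOME \
  --keyring-backend=$AKASH_KEYRING_BACKEND \
  --node=$AKASH_NODE \
  --chain-id=$AKASH_CHAIN_ID \
  --gas-prices="0.025uakt" --gas="auto" --gas-adjustment=1.15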

After registering the provider on the Akash Network, I was able to see my host there:

akash@lamborghini:~/akash-lambo$ akash \
  --node "$AKASH_NODE" \
  query provider list -o json | jq -r '.providers[] | [ .attributes[].value, .host_uri, .owner ] | @csv' | sort -d
"australia-east-akash-provider","https://provider.akashprovider.com","akash1ykxzzu332txz8zsfew7z77wgsdyde75wgugntn"
"equinix-metal-ams1","akash","mn2-0","https://provider.ams1p0.mainnet.akashian.io:8443","akash1ccktptfkvdc67msasmesuy5m7gpc76z75kukpz"
"equinix-metal-ewr1","akash","mn2-0","https://provider.ewr1p0.mainnet.akashian.io:8443","akash1f6gmtjpx4r8qda9nxjwq26fp5mcjyqmaq5m6j7"
"equinix-metal-sjc1","akash","mn2-0","https://provider.sjc1p0.mainnet.akashian.io:8443","akash10cl5rm0cqnpj45knzakpa4cnvn5amzwp4lhcal"
"equinix-metal-sjc1","akash","mn2-0","https://provider.sjc1p1.mainnet.akashian.io:8443","akash1cvpefa7pw8qy0u4euv497r66mvgyrg30zv0wu0"
"europe","nixaid","https://provider.akash.nixaid.com:8443","akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0"
"us-west-demo-akhil","dehazelabs","https://73.157.111.139:8443","akash1rt2qk45a75tjxzathkuuz6sq90jthekehnz45z"
"us-west-demo-caleb","https://provider.akashian.io","akash1rdyul52yc42vd8vhguc0t9ryug9ftf2zut8jxa"
"us-west-demo-daniel","https://daniel1q84.iptime.org","akash14jpkk4n5daspcjdzsrylgw38lj9xug2nznqnu2"
"us-west","https://ssymbiotik.ekipi365.com","akash1j862g3efcw5xcvn0402uwygrwlzfg5r02w9jw5"

Create the provider certificate

You must issue a transaction to the blockchain to create a certificate associated with your provider:

akash@lamborghini:~/akash-lambo$ akash tx cert create server $PROVIDER_DOMAIN \
  --chain-id $AKASH_CHAIN_ID \
  --keyring-backend $AKASH_KEYRING_BACKEND \
  --from $AKASH_PROVIDER_KEY \
  --home=$AKASH_HOME \
  --node=$AKASH_NODE \
  --fees 1000uakt

Starting the Akash Provider

The Akash provider will need the Kubernetes admin config, so copy it somewhere the akash user can read it:

# cp /etc/kubernetes/admin.conf /home/akash/akash-lambo/k8s-admin.conf
# chown akash:akash /home/akash/akash-lambo/k8s-admin.conf

Create the start-provider.sh file which will start the Akash Provider.
But first, create the key-pass.txt file with the password you set when you created the provider's key.

echo "Your-passWoRd" | tee /home/akash/akash-lambo/key-pass.txt
For some reason, only the http://135.181.60.250:26657 RPC node out of the curl -s "$AKASH_NET/rpc-nodes.txt" list worked for me.
# cat /home/akash/akash-lambo/start-provider.sh
#!/usr/bin/env bash

cd /home/akash/akash-lambo
( sleep 2s; cat key-pass.txt; cat key-pass.txt ) | \
  /home/akash/bin/akash provider run \
  --home /home/akash/akash-lambo/home \
  --chain-id akashnet-2 \
  --node http://135.181.60.250:26657 \
  --keyring-backend=file \
  --from default \
  --fees 1000uakt \
  --kubeconfig /home/akash/akash-lambo/k8s-admin.conf \
  --cluster-k8s true \
  --deployment-ingress-domain provider.akash.nixaid.com \
  --deployment-ingress-static-hosts true \
  --bid-price-strategy scale \
  --bid-price-cpu-scale 0.001 \
  --bid-price-memory-scale 0.001 \
  --bid-price-storage-scale 0.00001 \
  --bid-price-endpoint-scale 0 \
  --bid-deposit 5000000uakt \
  --cluster-node-port-quantity 1000 \
  --cluster-public-hostname provider.akash.nixaid.com

Make sure it's executable:

chmod +x /home/akash/akash-lambo/start-provider.sh

Create akash-provider.service systemd service so Akash provider starts automatically:

# cat /etc/systemd/system/akash-provider.service
[Unit]
Description=Akash Provider
After=network.target

[Service]
User=akash
Group=akash
ExecStart=/home/akash/akash-lambo/start-provider.sh
KillSignal=SIGINT
Restart=on-failure
RestartSec=15
StartLimitInterval=200
StartLimitBurst=10
#LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Start the Akash provider:

systemctl daemon-reload
systemctl start akash-provider
systemctl enable akash-provider

Check the logs:

journalctl -u akash-provider --since '5 min ago' -f

Akash detects the node as follows:

D[2021-06-29|11:33:34.190] node resources                               module=provider-cluster cmp=service cmp=inventory-service node-id=lamborghini available-cpu="units:<val:\"7050\" > attributes:<key:\"arch\" value:\"amd64\" > " available-memory="quantity:<val:\"32896909312\" > " available-storage="quantity:<val:\"47409223602\" > "

cpu units: 7050 / 1000 = 7 CPU (the server actually has 8 CPUs; it must have reserved about 1 CPU for whatever the provider node itself is running, which is a smart thing)
available memory: 32896909312 / (1024^3) = 30.63Gi (the server has 32Gi RAM)
available storage: 47409223602 / (1024^3) = 44.15Gi (this one is a bit odd, as I've only got 32Gi available on the rootfs "/")

Deploying on our own Akash provider

In order to get your Akash client configured on your client side, please refer to the first 4 steps in https://nixaid.com/solana-on-akashnet/ or https://docs.akash.network/guides/deploy

Now that we have our own Akash Provider running, let's try to deploy something on it.
I'll deploy the echoserver service, which returns interesting information to the client when queried over its HTTP/HTTPS port.

$ cat web.yml
---
version: "2.0"

services:
  web:
    image: gcr.io/google_containers/echoserver:1.10
    expose:
      - port: 8080
        as: 80
        to:
          - global: true

profiles:
  compute:
    web:
      resources:
        cpu:
          units: 0.1
        memory:
          size: 512Mi
        storage:
          size: 512Mi
  placement:
    akash:
      #attributes:
      #  host: nixaid
      #signedBy:
      #  anyOf:
      #    - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" ## AKASH
      pricing:
        web: 
          denom: uakt
          amount: 2000

deployment:
  web:
    akash:
      profile: web
      count: 1

Note that I've commented out the signedBy directive, which clients typically use to make sure they are deploying on a trusted provider. Leaving it commented out means you can deploy on any Akash provider you want, not necessarily a signed one.

You can use the akash tx audit attr create command to sign attributes on your Akash Provider if you wish your clients to use the signedBy directive.

akash tx deployment create web.yml \
  --from default \
  --node $AKASH_NODE \
  --chain-id $AKASH_CHAIN_ID \
  --fees 1000uakt

Now that the deployment has been announced to the Akash network, let's look at our Akash Provider's side.

Here is what a successful reservation looks like from the Akash provider's point of view:

"Reservation fulfilled" is what we are looking for.
Jun 30 00:00:46 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:46.122] syncing sequence                             cmp=client/broadcaster local=31 remote=31
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.837] order detected                               module=bidengine-service order=order/akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.867] group fetched                                module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.867] requesting reservation                       module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:00:53.868] reservation requested                        module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 resources="group_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829 gseq:1 > state:open group_spec:<name:\"akash\" requirements:<signed_by:<> > resources:<resources:<cpu:<units:<val:\"100\" > > memory:<quantity:<val:\"134217728\" > > storage:<quantity:<val:\"134217728\" > > endpoints:<> > count:1 price:<denom:\"uakt\" amount:\"2000\" > > > created_at:1585832 "
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.868] Reservation fulfilled                        module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:00:53.868] submitting fulfillment                       module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 price=357uakt
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.932] broadcast response                           cmp=client/broadcaster response="Response:\n  TxHash: BDE0FE6CD12DB3B137482A0E93D4099D7C9F6A5ABAC597E17F6E94706B84CC9A\n  Raw Log: []\n  Logs: []" err=null
Jun 30 00:00:53 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:53.932] bid complete                                 module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:56 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:00:56.121] syncing sequence                             cmp=client/broadcaster local=32 remote=31

Now that the Akash provider has fulfilled the reservation, we should be able to see it as a bid (offer) on the client side:
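
The client-side variables used below come from the client setup guides referenced earlier; for this walkthrough they correspond to the values visible in the provider logs above (a sketch, adjust to your own deployment):

export AKASH_ACCOUNT_ADDRESS=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h   # the client's own wallet address
export AKASH_DSEQ=1585829    # deployment sequence, printed by "akash tx deployment create"
export AKASH_GSEQ=1
export AKASH_OSEQ=1
export AKASH_PROVIDER=akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0          # our provider's address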

$ akash query market bid list   --owner=$AKASH_ACCOUNT_ADDRESS   --node $AKASH_NODE   --dseq $AKASH_DSEQ   
...
- bid:
    bid_id:
      dseq: "1585829"
      gseq: 1
      oseq: 1
      owner: akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h
      provider: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    created_at: "1585836"
    price:
      amount: "357"
      denom: uakt
    state: open
  escrow_account:
    balance:
      amount: "50000000"
      denom: uakt
    id:
      scope: bid
      xid: akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    owner: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    settled_at: "1585836"
    state: open
    transferred:
      amount: "0"
      denom: uakt
...

Let's create the leases now (accept the bid offered by the Akash Provider):

akash tx market lease create \
  --chain-id $AKASH_CHAIN_ID \
  --node $AKASH_NODE \
  --owner $AKASH_ACCOUNT_ADDRESS \
  --dseq $AKASH_DSEQ \
  --gseq $AKASH_GSEQ \
  --oseq $AKASH_OSEQ \
  --provider $AKASH_PROVIDER \
  --from default \
  --fees 1000uakt

Now we can see "lease won" at the provider's site:

Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:03:42.479] ignoring group                               module=bidengine-order order=akash15yd3qszmqausvzpj7n0y0e4pft2cu9rt5gccda/1346631/1/1 group=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:03:42.479] lease won                                    module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] shutting down                                module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] lease won                                    module=provider-manifest lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] new lease                                    module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:03:42.480] emit received events skipped                 module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 data=<nil> leases=1 manifests=0
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:03:42.520] data received                                module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 version=77fd690d5e5ec8c320a902da09a59b48dc9abd0259d84f9789fee371941320e7
Jun 30 00:03:42 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:03:42.520] emit received events skipped                 module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 data="deployment:<deployment_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829 > state:active version:\"w\\375i\\r^^\\310\\303 \\251\\002\\332\\t\\245\\233H\\334\\232\\275\\002Y\\330O\\227\\211\\376\\343q\\224\\023 \\347\" created_at:1585832 > groups:<group_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829 gseq:1 > state:open group_spec:<name:\"akash\" requirements:<signed_by:<> > resources:<resources:<cpu:<units:<val:\"100\" > > memory:<quantity:<val:\"134217728\" > > storage:<quantity:<val:\"134217728\" > > endpoints:<> > count:1 price:<denom:\"uakt\" amount:\"2000\" > > > created_at:1585832 > escrow_account:<id:<scope:\"deployment\" xid:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829\" > owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" state:open balance:<denom:\"uakt\" amount:\"5000000\" > transferred:<denom:\"uakt\" amount:\"0\" > settled_at:1585859 > " leases=1 manifests=0

Send the manifest to finally deploy the echoserver service on your Akash Provider!

akash provider send-manifest web.yml \
  --node $AKASH_NODE \
  --dseq $AKASH_DSEQ \
  --provider $AKASH_PROVIDER \
  --from default

The provider has received the manifest => "manifest received", and the kube-builder module has "created service" under the c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76 namespace:

Jun 30 00:06:16 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:06:16.122] syncing sequence                             cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:21 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:21.413] inventory fetched                            module=provider-cluster cmp=service cmp=inventory-service nodes=1
Jun 30 00:06:21 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:21.413] node resources                               module=provider-cluster cmp=service cmp=inventory-service node-id=lamborghini available-cpu="units:<val:\"7050\" > attributes:<key:\"arch\" value:\"amd64\" > " available-memory="quantity:<val:\"32896909312\" > " available-storage="quantity:<val:\"47409223602\" > "
Jun 30 00:06:26 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:06:26.122] syncing sequence                             cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:35 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:06:35.852] manifest received                            module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:06:35 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:35.852] requests valid                               module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 num-requests=1
Jun 30 00:06:35 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:35.853] publishing manifest received                 module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 num-leases=1
Jun 30 00:06:35 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:35.853] publishing manifest received for lease       module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 lease_id=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:06:35 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:06:35.853] manifest received                            module=provider-cluster cmp=service
Jun 30 00:06:36 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:36.023] provider/cluster/kube/builder: created service module=kube-builder service="&Service{ObjectMeta:{web      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[akash.network:true akash.network/manifest-service:web akash.network/namespace:c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76] map[] [] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:0-80,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{akash.network: true,akash.network/manifest-service: web,akash.network/namespace: c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76,},ClusterIP:,Type:ClusterIP,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}"
Jun 30 00:06:36 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:06:36.121] syncing sequence                             cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:36 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:36.157] provider/cluster/kube/builder: created rules module=kube-builder rules="[{Host:623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com IngressRuleValue:{HTTP:&HTTPIngressRuleValue{Paths:[]HTTPIngressPath{HTTPIngressPath{Path:/,Backend:IngressBackend{Resource:nil,Service:&IngressServiceBackend{Name:web,Port:ServiceBackendPort{Name:,Number:80,},},},PathType:*Prefix,},},}}}]"
Jun 30 00:06:36 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:06:36.222] deploy complete                              module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash

Let's see the lease status from the client side:

akash provider lease-status \
  --node $AKASH_NODE \
  --dseq $AKASH_DSEQ \
  --provider $AKASH_PROVIDER \
  --from default

{
  "services": {
    "web": {
      "name": "web",
      "available": 1,
      "total": 1,
      "uris": [
        "623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com"
      ],
      "observed_generation": 1,
      "replicas": 1,
      "updated_replicas": 1,
      "ready_replicas": 1,
      "available_replicas": 1
    }
  },
  "forwarded_ports": {}
}

We've got it!
Let's query it:

$ curl 623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com


Hostname: web-5c6f84887-6kh9p

Pod Information:
  -no pod information available-

Server values:
  server_version=nginx: 1.13.3 - lua: 10008

Request Information:
  client_address=10.233.85.136
  method=GET
  real path=/
  query=
  request_version=1.1
  request_scheme=http
  request_uri=http://623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com:8080/

Request Headers:
  accept=*/*
  host=623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com
  user-agent=curl/7.68.0
  x-forwarded-for=CLIENT_IP_REDACTED
  x-forwarded-host=623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com
  x-forwarded-port=80
  x-forwarded-proto=http
  x-real-ip=CLIENT_IP_REDACTED
  x-request-id=8cdbcd7d0c4f42440669f7396e206cae
  x-scheme=http

Request Body:
  -no body in request-

Our deployment on our own Akash provider is working as expected! Hooray!

Let's see how our deployment actually looks from the Kubernetes point of view on our Akash Provider:

# kubectl get all --all-namespaces -l akash.network=true
NAMESPACE                                       NAME                      READY   STATUS    RESTARTS   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   pod/web-5c6f84887-6kh9p   1/1     Running   0          2m37s

NAMESPACE                                       NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   service/web   ClusterIP   10.233.47.15   <none>        80/TCP    2m37s

NAMESPACE                                       NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   deployment.apps/web   1/1     1            1           2m38s

NAMESPACE                                       NAME                            DESIRED   CURRENT   READY   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   replicaset.apps/web-5c6f84887   1         1         1       2m37s

# kubectl get ing --all-namespaces 
NAMESPACE                                       NAME   CLASS    HOSTS                                                  ADDRESS     PORTS   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   web    <none>   623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com   localhost   80      8m47s

# kubectl -n c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76 describe ing web
Name:             web
Namespace:        c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76
Address:          localhost
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                                                  Path  Backends
  ----                                                  ----  --------
  623n1u4k2hbiv6f1kuiscparqk.provider.akash.nixaid.com  
                                                        /   web:80 (10.233.85.137:8080)
Annotations:                                            <none>
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    8m9s (x2 over 9m5s)  nginx-ingress-controller  Scheduled for sync


# crictl pods
POD ID              CREATED             STATE               NAME                                        NAMESPACE                                       ATTEMPT             RUNTIME
4c22dba05a2c0       5 minutes ago       Ready               web-5c6f84887-6kh9p                         c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   0                   runsc
...

The client can read their deployment's logs too:

akash \
  --node "$AKASH_NODE" \
  provider lease-logs \
  --dseq "$AKASH_DSEQ" \
  --gseq "$AKASH_GSEQ" \
  --oseq "$AKASH_OSEQ" \
  --provider "$AKASH_PROVIDER" \
  --from default \
  --follow

[web-5c6f84887-6kh9p] Generating self-signed cert
[web-5c6f84887-6kh9p] Generating a 2048 bit RSA private key
[web-5c6f84887-6kh9p] ..............................+++
[web-5c6f84887-6kh9p] ...............................................................................................................................................+++
[web-5c6f84887-6kh9p] writing new private key to '/certs/privateKey.key'
[web-5c6f84887-6kh9p] -----
[web-5c6f84887-6kh9p] Starting nginx
[web-5c6f84887-6kh9p] 10.233.85.136 - - [30/Jun/2021:00:08:00 +0000] "GET / HTTP/1.1" 200 744 "-" "curl/7.68.0"
[web-5c6f84887-6kh9p] 10.233.85.136 - - [30/Jun/2021:00:27:10 +0000] "GET / HTTP/1.1" 200 744 "-" "curl/7.68.0"

Once we're done testing, it's time to close the deployment:

akash tx deployment close \
  --node $AKASH_NODE \
  --chain-id $AKASH_CHAIN_ID \
  --dseq $AKASH_DSEQ \
  --owner $AKASH_ACCOUNT_ADDRESS \
  --from default \
  --fees 1000uakt

The provider sees it as expected: "deployment closed", "teardown request", ...:

Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.828] deployment closed                            module=provider-manifest deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.828] manager done                                 module=provider-manifest deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.829] teardown request                             module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.830] shutting down                                module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-monitor
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.830] shutdown complete                            module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-monitor
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.837] teardown complete                            module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.837] waiting on dm.wg                             module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] waiting on withdrawal                        module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] shutting down                                module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-withdrawal
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] shutdown complete                            module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-withdrawal
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] shutdown complete                            module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] manager done                                 module=provider-cluster cmp=service lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] unreserving capacity                         module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] attempting to removing reservation           module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] removing reservation                         module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] unreserve capacity complete                  module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:46 lamborghini start-provider.sh[1029866]: I[2021-06-30|00:28:46.122] syncing sequence                             cmp=client/broadcaster local=36 remote=36

Tearing down the cluster

Just in case you want to destroy your Kubernetes cluster:

systemctl disable akash-provider
systemctl stop akash-provider

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets

kubectl delete node <node name>
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t nat -X && iptables -t mangle -F && iptables -t mangle -X  && iptables -t raw -F && iptables -t raw -X && iptables -X
ip6tables -F && ip6tables -t nat -F && ip6tables -t nat -X && ip6tables -t mangle -F && ip6tables -t mangle -X && ip6tables -t raw -F && ip6tables -t raw -X && ip6tables -X
ipvsadm -C

## if Weave Net was used:
weave reset   # or "ip link delete weave"

## if Calico was used:
ip link
ip link delete cali*
ip link delete vxlan.calico

A bit of troubleshooting, to get out of the following situation:

## if you get the following error during "crictl rmp -a" (removing all pod sandboxes with crictl):
removing the pod sandbox "f89d5f4987fbf80790e82eab1f5634480af814afdc82db8bca92dc5ed4b57120": rpc error: code = Unknown desc = sandbox network namespace "/var/run/netns/cni-65fbbdd0-8af6-8c2a-0698-6ef8155ca441" is not fully closed

ip netns ls
ip -all netns delete

ps -ef|grep -E 'runc|runsc|shim'
ip r
pidof runsc-sandbox |xargs -r kill
pidof /usr/bin/containerd-shim-runc-v2 |xargs -r kill -9
find /run/containerd/io.containerd.runtime.v2.task/ -ls

rm -rf /etc/cni/net.d

systemctl restart containerd
systemctl restart docker   # only if you have docker installed

Donate

Please consider donating to me if you found this article useful.

AKT, BTC, ATOM, ETH, ALGO, XLM, ADA, CRO, XMR, BNB, ZEC, etc -- please email me or DM me on Twitter https://twitter.com/andreyarapov for the donation address.

References

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver
https://storage.googleapis.com/kubernetes-release/release/stable.txt
https://gvisor.dev/docs/user_guide/containerd/quick_start/
https://github.com/containernetworking/cni#how-do-i-use-cni
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
https://kubernetes.io/docs/concepts/overview/components/
https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
https://docs.akash.network/operator/provider
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down