The recommended way to provision a baremetal Kubernetes cluster is Kubespray.
Make sure all of your future cluster nodes are accessible and have your SSH key provisioned.
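If the key is not yet in place, a minimal sketch for copying your public key to every node (the IP addresses below are placeholders):

```bash
# Copy your SSH public key to each future cluster node.
for node in 10.0.0.1 10.0.0.2 10.0.0.3; do
  ssh-copy-id root@"$node"
done
```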
To provision the cluster, perform the following steps:
```bash
git clone git@github.com:kubernetes-sigs/kubespray.git
cd kubespray
git checkout v2.10.4
pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/*CLUSTER_NAME*
declare -a IPS=(*IPS_OF_MACHINES*)
CONFIG_FILE=inventory/*CLUSTER_NAME*/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
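Before changing any settings, you can optionally confirm that Ansible reaches every node through the generated inventory (replace CLUSTER_NAME with the name used above):

```bash
# Ping all hosts from the generated inventory over SSH.
ansible -i inventory/CLUSTER_NAME/hosts.yml all -m ping -u root
```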
Set kube_apiserver_ip in inventory/*CLUSTER_NAME*/group_vars/k8s-cluster/k8s-cluster.yml to the IP of your preferred master node.

Change the libselinux-python installation task in roles/bootstrap-os/tasks/bootstrap-centos.yml from:

```yaml
package:
  name: libselinux-python
  state: present
```

to:

```yaml
raw:
  yum install libselinux-python
```
Enable the local_volume_provisioner_enabled field in inventory/pre-prod/group_vars/k8s-cluster/addons.yml:

```yaml
local_volume_provisioner_enabled: true
local_volume_provisioner_namespace: kube-system
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/k8s-volumes
    mount_dir: /mnt/volumes
```
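The local volume provisioner only discovers directories that are mounted under host_dir, so each node needs at least one bind-mounted subdirectory there; a sketch to run on every node (vol1 is a hypothetical volume name):

```bash
# Create a backing directory and bind-mount it under the provisioner's host_dir.
sudo mkdir -p /mnt/disks/vol1 /mnt/k8s-volumes/vol1
sudo mount --bind /mnt/disks/vol1 /mnt/k8s-volumes/vol1
# Persist the bind mount across reboots.
echo '/mnt/disks/vol1 /mnt/k8s-volumes/vol1 none bind 0 0' | sudo tee -a /etc/fstab
```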
Edit the inventory/pre-prod/group_vars/k8s-cluster/k8s-cluster.yml file and enable persistent volumes:

```yaml
kube_apiserver_ip: "*IPS_OF_MACHINES*" # "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
persistent_volumes_enabled: true
```
```bash
ansible-playbook -i inventory/pre-prod/hosts.yml --become --become-user=root -u root cluster.yml
```
Copy /etc/kubernetes/admin.conf from one of the cluster nodes to your host machine and configure kubectl to use it by running export KUBECONFIG=*path_to_your_config*.
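For example (the master IP and paths below are placeholders):

```bash
# Fetch the admin kubeconfig from a master node and point kubectl at it.
scp root@MASTER_NODE_IP:/etc/kubernetes/admin.conf ~/.kube/cluster-admin.conf
export KUBECONFIG=~/.kube/cluster-admin.conf
kubectl get nodes   # every node should eventually report Ready
```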
Connect to your Kubernetes cluster and perform the following steps:
```bash
helm init --force-upgrade
curl -o ingress.yml https://raw.githubusercontent.com/helm/charts/master/stable/nginx-ingress/values.yaml
```
Edit ingress.yml so that the ingress controller runs on the host network and is exposed through a NodePort service; lines prefixed with < are the original values and lines prefixed with > are the replacements:

```diff
< hostNetwork: false
---
> hostNetwork: true
< dnsPolicy: ClusterFirst
---
> dnsPolicy: ClusterFirstWithHostNet
< reportNodeInternalIp: false
---
> reportNodeInternalIp: true
< useHostPort: false
---
> useHostPort: true
< http: 80
< https: 443
---
> http: 30001
> https: 30002
< type: LoadBalancer
---
> # type: LoadBalancer
< # type: NodePort
< # nodePorts:
< # http: 32080
< # https: 32443
< # tcp:
< # 8080: 32808
---
> type: NodePort
< http: ""
< https: ""
< tcp: {}
< udp: {}
---
> http: 32080
> https: 32443
```
```bash
helm install --name ingress stable/nginx-ingress -f ingress.yml
```
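To confirm that the controller came up with the NodePort configuration from ingress.yml, a quick check (the resource names below assume the release name ingress used above):

```bash
# The controller service should be of type NodePort, mapping 80->32080 and 443->32443.
kubectl get svc ingress-nginx-ingress-controller
kubectl get pods -l app=nginx-ingress
```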
Congratulations, your infrastructure is now ready for deployment!
To configure the Vault backend (e.g. OVH OpenStack Swift), edit config/environments/*env*/vault.yml:

```yaml
storage:
  swift:
    auth_url: "https://auth.cloud.ovh.net/v2.0"
    container: "CONTAINER_NAME"
    username: changeme
    password: changeme
    tenant: "changeme"
    region: "REGION_NAME" # Should be uppercase
    tenant_id: "changeme"
```
You can find more information on the OpenStack Swift Vault backend here.
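If you want to sanity-check the Swift credentials before pointing Vault at them, one option is the python-swiftclient CLI (assuming it is installed; the values mirror vault.yml):

```bash
# Show container metadata to verify the auth URL, credentials and container name.
swift --os-auth-url https://auth.cloud.ovh.net/v2.0 \
      --os-username changeme \
      --os-password changeme \
      --os-tenant-name changeme \
      --os-region-name REGION_NAME \
      stat CONTAINER_NAME
```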
Barong database settings can be configured in config/environments/*env*/barong.yml:

```yaml
db:
  name: changeme
  user: changeme
  password: changeme
  host: changeme # SQL hostname (e.g. 42.1.33.7)
  port: "changeme" # Usually 3306
```
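A quick way to verify these credentials from any machine that can reach the SQL host (assuming the mysql client is installed; host and user are placeholders):

```bash
# Connect with the values from barong.yml; prompts for the password interactively.
mysql -h 42.1.33.7 -P 3306 -u changeme -p -e 'SELECT VERSION();'
```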
To access GCR, create a service account with correct access rights and add a pull secret to the cluster:
```bash
kubectl create secret docker-registry pull-gcr --docker-server=https://gcr.io --docker-username=_json_key --docker-email=*EMAIL* --docker-password="$(cat *PATH_TO_JSON_FILE*)" -n *deployment_id*-app
```
Add the secret name to the configuration of any component that needs to pull a private image:

```yaml
pullSecret: pull-gcr
```
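To double-check that the secret landed in the right namespace with the expected type, a small verification (namespace placeholder as above):

```bash
# Replace DEPLOYMENT_ID with your deployment id; the type should be kubernetes.io/dockerconfigjson.
kubectl get secret pull-gcr -n DEPLOYMENT_ID-app -o jsonpath='{.type}'
```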