OpenShift Origin multi-node installation guide

In this tutorial, I will describe how to deploy an OpenShift Origin multi-node cluster manually.
Before we start, let's clarify the basic requirements and the deployment topology.

One master node: VirtualBox VM, 2 cores + 2 GB RAM + 20 GB disk
Two worker nodes: VirtualBox VMs, 2 cores + 2 GB RAM + 20 GB disk each

master: hostname master.openshift.shaunos.com, IP address 192.168.2.49
node1: hostname node1.openshift.shaunos.com, IP address 192.168.2.50
node2: hostname node2.openshift.shaunos.com, IP address 192.168.2.56

All three VMs run a CentOS 7.2 (1511) minimal installation and sit on the same subnet with static IP addresses.

For OpenShift OAuth, I use the anypassword auth plugin.
For OpenShift networking, I use the SDN multi-tenant plugin.
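
For reference, here is a minimal sketch of what the identityProviders block in /etc/origin/master/master-config.yaml (created in step 6) looks like for an allow-all password provider; the provider name "anypassword" and the exact layout are assumptions, so compare against the file the origin-master package generates.

oauthConfig:
  identityProviders:
  - name: anypassword              # assumed name; any non-empty username/password is accepted
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider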

Okay, let’s start

1, Disable the default firewalld service on all hosts.

systemctl stop firewalld
systemctl disable firewalld

2, Install the basic packages needed, on all hosts.

yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion curl vim openssl

3, We need a DNS service; if you already have one, you can skip this step. I use named and install it on the master node.

yum install -y bind

vim /etc/named.conf and modify the following two lines:

allow-query     { 0.0.0.0/0; };
listen-on port 53 { any; };

vim /etc/named.rfc1912.zones and append the following contents:

zone "openshift.shaunos.com" IN {
    type master;
    file "named.openshift.shaunos.com";
    allow-update { none; };
};

zone "2.168.192.in-addr.arpa" IN {
        type master;
        file "192.168.2.arpa";
        allow-update { none; };
};

vim /var/named/named.openshift.shaunos.com to create the forward zone file:

$TTL 1D
@       IN SOA openshift.shaunos.com. rname.invalid. (
                                        0       ; serial  
                                        1D      ; refresh  
                                        1H      ; retry  
                                        1W      ; expire  
                                        3H )    ; minimum  
@       IN NS @
        A  192.168.2.49
master  IN A 192.168.2.49
node1  IN A 192.168.2.50
node2  IN A 192.168.2.56

vim /var/named/192.168.2.arpa to create the reverse zone file:

$TTL 1D
@       IN SOA  openshift.shaunos.com. rname.invalid. (
                                        0       ; serial  
                                        1D      ; refresh  
                                        1H      ; retry  
                                        1W      ; expire  
                                        3H )    ; minimum  
        NS      @
        AAAA    ::1
49      PTR     master.openshift.shaunos.com.
50      PTR     node1.openshift.shaunos.com.
56      PTR     node2.openshift.shaunos.com.
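
Before starting named, it is worth validating the configuration and both zone files; this check is only a suggestion on top of the original steps. named-checkconf and named-checkzone ship with the bind package.

named-checkconf /etc/named.conf
named-checkzone openshift.shaunos.com /var/named/named.openshift.shaunos.com
named-checkzone 2.168.192.in-addr.arpa /var/named/192.168.2.arpa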

Enable and start the named service:

systemctl enable named
systemctl start named
systemctl status named

Add 192.168.2.49 to /etc/resolv.conf:

nameserver 192.168.2.49
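
A quick resolution check with dig (installed with bind-utils in step 2); the answers should match the addresses and hostnames defined above:

dig +short master.openshift.shaunos.com @192.168.2.49
dig +short node1.openshift.shaunos.com @192.168.2.49
dig +short -x 192.168.2.50 @192.168.2.49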

4, Install and set up Docker, and add parameters to the Docker daemon, on all nodes:

yum install -y docker

vim /etc/sysconfig/docker and set the OPTIONS line:

OPTIONS=' --selinux-enabled --log-driver=json-file --log-opt max-size=50m'

Do not start the Docker service at this time.

5, Install the OpenShift Origin package repository, on all nodes:

yum install -y centos-release-openshift-origin

6, On the master node, install the master packages and set up the master configuration.

yum install -y origin-master origin-pod origin-sdn-ovs origin-dockerregistry

vim /etc/origin/master/master-config.yaml and append the following entries to the corsAllowedOrigins section:

- kubernetes.default
- kubernetes.default.svc.cluster.local
- kubernetes
- openshift.default
- openshift.default.svc
- 172.30.0.1
- openshift.default.svc.cluster.local
- kubernetes.default.svc
- openshift

Set the scheduler config file:

schedulerConfigFile: "/etc/origin/master/scheduler.json"

Set the network plugin:

networkPluginName: "redhat/openshift-ovs-multitenant"

Set the routing subdomain:

routingConfig:
  subdomain: openshift.shaunos.com
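
For orientation, the three settings above roughly live in the following sections of /etc/origin/master/master-config.yaml; this excerpt is based on that assumption, and the surrounding keys in your generated file may differ.

kubernetesMasterConfig:
  schedulerConfigFile: /etc/origin/master/scheduler.json
networkConfig:
  networkPluginName: redhat/openshift-ovs-multitenant
routingConfig:
  subdomain: openshift.shaunos.com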

Create the scheduler.json file:
vim /etc/origin/master/scheduler.json

{
    "apiVersion": "v1",
    "kind": "Policy",
    "predicates": [
        {
            "name": "MatchNodeSelector"
        },
        {
            "name": "PodFitsResources"
        },
        {
            "name": "PodFitsPorts"
        },
        {
            "name": "NoDiskConflict"
        },
        {
            "name": "NoVolumeZoneConflict"
        },
        {
            "name": "MaxEBSVolumeCount"
        },
        {
            "name": "MaxGCEPDVolumeCount"
        },
        {
            "argument": {
                "serviceAffinity": {
                    "labels": [
                        "region"
                    ]
                }
            },
            "name": "Region"
        }
    ],
    "priorities": [
        {
            "name": "LeastRequestedPriority",
            "weight": 1
        },
        {
            "name": "SelectorSpreadPriority",
            "weight": 1
        },
        {
            "argument": {
                "serviceAntiAffinity": {
                    "label": "zone"
                }
            },
            "name": "Zone",
            "weight": 2
        }
    ]
}

Save and exit.
Enable and start the origin-master service:

systemctl enable origin-master
systemctl start origin-master

Create the kube config file:

mkdir .kube
ln -s /etc/origin/master/admin.kubeconfig .kube/config

If you can log in as system:admin, it works:
oc login -u system:admin

We do not schedule any pods on the master node, so there is no need to start the docker and iptables services there.

7, On all nodes except the master, install the node packages:

yum install -y origin-node origin-pod origin-sdn-ovs origin-dockerregistry

8, On the master node, generate the node configuration files.
Create the node config directories:

mkdir /etc/origin/node1.openshift.shaunos.com
mkdir /etc/origin/node2.openshift.shaunos.com

ln -s /etc/origin/ openshift.local.config

Create the node config files (the symlink above lets create-node-config find the master certificates at its default path):

oc adm create-node-config --node-dir='/etc/origin/node1.openshift.shaunos.com/' --dns-domain='openshift.shaunos.com' --dns-ip='192.168.2.49' --hostnames='node1.openshift.shaunos.com' --master='https://192.168.2.49:8443' --network-plugin='redhat/openshift-ovs-multitenant' --node='node1.openshift.shaunos.com'

oc adm create-node-config --node-dir='/etc/origin/node2.openshift.shaunos.com/' --dns-domain='openshift.shaunos.com' --dns-ip='192.168.2.49' --hostnames='node2.openshift.shaunos.com' --master='https://192.168.2.49:8443' --network-plugin='redhat/openshift-ovs-multitenant' --node='node2.openshift.shaunos.com'
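
Optionally, sanity-check the generated directories before copying them; each should contain node-config.yaml together with the node's certificates and kubeconfig (the exact file names depend on the origin version):

ls /etc/origin/node1.openshift.shaunos.com/
ls /etc/origin/node2.openshift.shaunos.com/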

Copy the config files to the nodes:

scp /etc/origin/node1.openshift.shaunos.com/* root@192.168.2.50:/etc/origin/node/
scp /etc/origin/node2.openshift.shaunos.com/* root@192.168.2.56:/etc/origin/node/

9, Set up the nodes, on all nodes except the master.
vim /etc/origin/node/node-config.yaml and add the following contents after the kind section:

kubeletArguments:
  node-labels:
  - region=primary
  - zone=west

Copy the OpenShift root certificate into the system trust store: take the contents of /etc/origin/node/ca.crt
and append them to the end of /etc/ssl/certs/ca-bundle.crt, as shown below.
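
A minimal way to do that append, assuming the default CentOS 7 bundle path (keep a backup so you can revert if needed):

cp /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-bundle.crt.bak   # backup of the current bundle
cat /etc/origin/node/ca.crt >> /etc/ssl/certs/ca-bundle.crt        # append the OpenShift CA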

Add 192.168.2.49 to /etc/resolv.conf:

nameserver 192.168.2.49

Enable and start the iptables, docker, and origin-node services:

systemctl enable iptables
systemctl start iptables

systemctl enable docker
systemctl start docker

systemctl enable origin-node
systemctl start origin-node
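
Back on the master, it is worth confirming that both nodes have registered and carry the labels set above:

oc get nodes --show-labels    # both nodes should be Ready with region=primary,zone=west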

10, Pull the Docker images needed, on all nodes except the master:

docker pull openshift/origin-sti-builder      
docker pull openshift/origin-deployer          
docker pull openshift/origin-docker-registry   
docker pull openshift/origin-haproxy-router   
docker pull openshift/origin-pod 
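
The commands above pull the :latest tag. If you prefer the images to match the installed origin version (v1.3.1 in the output shown in step 15), the tag can be pinned explicitly; this loop is only a sketch, so adjust the tag to whatever version yum actually installed.

for img in origin-sti-builder origin-deployer origin-docker-registry origin-haproxy-router origin-pod; do
    docker pull openshift/$img:v1.3.1   # pin the tag to the running origin version
done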

11, Set up the registry service, on the master node.
Create the registry serviceaccount:

oc create serviceaccount registry -n default

Add the privileged SCC to the registry serviceaccount:

oadm policy add-scc-to-user privileged system:serviceaccount:default:registry

Create the registry service:

oadm registry --service-account=registry --mount-host=/opt/openshift-registry

Ignore any errors that occur.
Create a route for the docker-registry service:

oc create route passthrough --service docker-registry -n default 

Get the service IP and route hostname:

oc get svc
oc get route

Use the docker-registry service cluster IP and the route hostname to create a certificate (replace 172.30.164.93 below with the cluster IP from your own oc get svc output):

oc adm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt  --signer-key=/etc/origin/master/ca.key --signer-serial=/etc/origin/master/ca.serial.txt --hostnames="172.30.164.93,docker-registry-default.openshift.shaunos.com" --cert=/etc/origin/master/registry.crt --key=/etc/origin/master/registry.key

Create the registry-certificates secret:

oc secrets new registry-certificates /etc/origin/master/registry.crt /etc/origin/master/registry.key -n default

Add this secret to the registry and default serviceaccounts:

oc secrets add registry registry-certificates -n default
oc secrets add default registry-certificates -n default

Update the docker-registry deployment config to use SSL:

oc env dc/docker-registry REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key -n default
oc patch dc/docker-registry --api-version=v1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","livenessProbe":{"httpGet":{"scheme":"HTTPS"}}}]}}}}'  -n default
oc patch dc/docker-registry --api-version=v1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","readinessProbe":{"httpGet":{"scheme":"HTTPS"}}}]}}}}'  -n default
oc volume dc/docker-registry --add --type=secret --secret-name=registry-certificates -m /etc/secrets -n default
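
To confirm the changes took effect, you can list the environment now set on the deployment config and watch for the registry pod to redeploy:

oc env dc/docker-registry --list -n default   # should show the REGISTRY_HTTP_TLS_* variables
oc get pods -n default                        # a new docker-registry pod should roll out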

12, Create the router service, on the master node:

oc create serviceaccount router -n default
oadm policy add-scc-to-user hostnetwork system:serviceaccount:default:router
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
oadm router router --replicas=1 --service-account=router
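
Once the router pod is running (in my environment it landed on node1, 192.168.2.50, as shown in step 15), the registry route can be spot-checked through it. The route hostname is not defined in the DNS zone from step 3, so this example pins it to the router node with curl's --resolve; /healthz is the path the registry's liveness probe uses, and an HTTP 200 here means TLS and routing work:

curl -k -i --resolve docker-registry-default.openshift.shaunos.com:443:192.168.2.50 https://docker-registry-default.openshift.shaunos.com/healthz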

13, On all nodes except the master, open the ports needed:

iptables -N OS_FIREWALL_ALLOW
iptables -I INPUT 8 -j OS_FIREWALL_ALLOW
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 10250 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 10250 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 10255 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 80 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 443 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 4789 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 10255 -j ACCEPT
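
These rules exist only in the running kernel. Since iptables-services is installed (step 2) and the iptables service is enabled on the nodes, one way to persist them across reboots is the command below; note that it snapshots every rule currently loaded, including ones origin-node manages itself, so treat it as an optional convenience rather than part of the original procedure.

service iptables save    # writes the current rules to /etc/sysconfig/iptables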

14, Create the image streams and templates, on the master node.
Download and extract the temp.tar file, then cd v1.3/:

for f in image-streams/image-streams-centos7.json; do cat $f | oc create -n openshift -f -; done
for f in db-templates/*.json; do cat $f | oc create -n openshift -f -; done
for f in quickstart-templates/*.json; do cat $f | oc create -n openshift -f -; done
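
To confirm the objects were created:

oc get is -n openshift          # image streams
oc get templates -n openshift   # database and quickstart templates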

15, Confirm the installation is okay.
On the master node:

[root@master ~]# oc get po -o wide
NAME                      READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-2-6ifmx   1/1       Running   0          16m       10.129.0.2     node1.openshift.shaunos.com
router-1-5u540            1/1       Running   0          33s       192.168.2.50   node1.openshift.shaunos.com
[root@master ~]# 
[root@master ~]# oc status
In project default on server https://192.168.2.49:8443

https://docker-registry-default.openshift.shaunos.com (passthrough) to pod port 5000-tcp (svc/docker-registry)
  dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v1.3.1 
    deployment #2 deployed 35 minutes ago - 1 pod
    deployment #1 failed 52 minutes ago: newer deployment was found running

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

svc/router - 172.30.143.94 ports 80, 443, 1936
  dc/router deploys docker.io/openshift/origin-haproxy-router:v1.3.1 
    deployment #1 deployed 19 minutes ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

16, Last, one thing remains to be done.
When I created the docker-registry service, I used a host path, /opt/openshift-registry, to store the Docker images. If you did the same, change the ownership of the /opt/openshift-registry directory to 1001:root on the node where the docker-registry pod runs. In my case the registry is running on node1, so on node1, run:

chown 1001:root /opt/openshift-registry

If you use another storage strategy, there is no need to do this.
