OpenShift Origin multi-node installation guide

In this tutorial, I will describe how to deploy an OpenShift multi-node cluster manually.
Before we start, let's clarify the basic requirements and the deployment topology.

one master node: VirtualBox VM, 2 cores + 2 GB memory + 20 GB disk
two worker nodes: VirtualBox VM, 2 cores + 2 GB memory + 20 GB disk each

master hostname:  IP address:
node1 hostname:  IP address:
node2 hostname:  IP address:

All three VMs run a CentOS 7.2 (1511) minimal installation and sit on the same subnet with static IP addresses.

for OpenShift OAuth, I use the anypassword auth plugin
for OpenShift networking, I use the SDN multi-tenant plugin

Okay, let’s start

1, disable default firewalld service on all hosts

systemctl stop firewalld
systemctl disable firewalld

2, install basic packages needed, on all hosts.

yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion curl vim openssl

3, we need a DNS service. If you already have one, you can skip this step. I use named and install it on the master node.

yum install -y bind

vim /etc/named.conf, modify the following two lines,

allow-query     {; };
listen-on port 53 { any; };

vim /etc/named.rfc1912.zones, append the following contents,

zone "" IN {
    type master;
    file "";
    allow-update { none; };
};

zone "" IN {
    type master;
    file "";
    allow-update { none; };
};

vim /var/named/, create the forward zone file,

@       IN SOA rname.invalid. (
                                        0       ; serial  
                                        1D      ; refresh  
                                        1H      ; retry  
                                        1W      ; expire  
                                        3H )    ; minimum  
@       IN NS @
master  IN A
node1  IN A
node2  IN A

vim /var/named/, create the reverse zone file,

@       IN SOA rname.invalid. (
                                        0       ; serial  
                                        1D      ; refresh  
                                        1H      ; retry  
                                        1W      ; expire  
                                        3H )    ; minimum  
        NS      @
        AAAA    ::1
49      PTR
50      PTR
56      PTR
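Whenever you edit either zone file later, remember to bump the SOA serial, or named will keep answering with stale data. A small helper sketch (hypothetical, not part of the original setup; assumes GNU sed and the `<n> ; serial` layout shown above):

```shell
# bump_serial: increment the SOA serial of a zone file so named picks up edits.
# Hypothetical helper; assumes the serial sits on its own line ending in "; serial".
bump_serial() {
    zf="$1"
    old=$(awk '$2 == ";" && $3 == "serial" {print $1}' "$zf")
    new=$((old + 1))
    sed -i "s/${old}\([[:space:]]*;[[:space:]]*serial\)/${new}\1/" "$zf"
    echo "serial: ${old} -> ${new}"
}
# usage (on the master): bump_serial /var/named/<your-zone-file>, then reload named
```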

enable and start named service

systemctl enable named
systemctl start named
systemctl status named

add the nameserver entry to /etc/resolv.conf
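Editing resolv.conf by hand works; here is an idempotent helper sketch (hypothetical, not from the original article) that prepends a nameserver line only if it is missing. The IP in the usage line is a placeholder for your master's address:

```shell
# add_nameserver: put a nameserver line at the top of resolv.conf, exactly once.
# Hypothetical helper; pass the real DNS server IP as the first argument.
add_nameserver() {
    ns="$1"; conf="${2:-/etc/resolv.conf}"
    grep -q "^nameserver ${ns}\$" "$conf" && return 0
    printf 'nameserver %s\n' "$ns" | cat - "$conf" > "${conf}.tmp" && mv "${conf}.tmp" "$conf"
}
# usage: add_nameserver 192.0.2.10   # replace with the master's real IP
```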


4, install and set up docker, and add parameters to the docker daemon, on all nodes:

yum install -y docker

vim /etc/sysconfig/docker

OPTIONS=' --selinux-enabled --log-driver=json-file --log-opt max-size=50m'

do not start the docker service at this time.
5, install origin package source, on all nodes:

yum install -y centos-release-openshift-origin

6, on master node, install master packages, and setup master configuration

yum install -y origin-master origin-pod origin-sdn-ovs origin-dockerregistry

vim /etc/origin/master/master-config.yaml,
append the following entries to the corsAllowedOrigins section,

- kubernetes.default
- kubernetes.default.svc.cluster.local
- kubernetes
- openshift.default
- openshift.default.svc
- openshift.default.svc.cluster.local
- kubernetes.default.svc
- openshift

set scheduler config file:

schedulerConfigFile: "/etc/origin/master/scheduler.json"

set network plugin

networkPluginName: "redhat/openshift-ovs-multitenant"

set the subdomain


create the scheduler.json file:
vim /etc/origin/master/scheduler.json

{
    "apiVersion": "v1",
    "kind": "Policy",
    "predicates": [
        { "name": "MatchNodeSelector" },
        { "name": "PodFitsResources" },
        { "name": "PodFitsPorts" },
        { "name": "NoDiskConflict" },
        { "name": "NoVolumeZoneConflict" },
        { "name": "MaxEBSVolumeCount" },
        { "name": "MaxGCEPDVolumeCount" },
        {
            "argument": {
                "serviceAffinity": {
                    "labels": [
                        "region"
                    ]
                }
            },
            "name": "Region"
        }
    ],
    "priorities": [
        { "name": "LeastRequestedPriority", "weight": 1 },
        { "name": "SelectorSpreadPriority", "weight": 1 },
        {
            "argument": {
                "serviceAntiAffinity": {
                    "label": "zone"
                }
            },
            "name": "Zone",
            "weight": 2
        }
    ]
}

save and exit.
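A single stray comma or brace in scheduler.json will stop origin-master from starting, so it is worth validating the file first. A quick check sketch (hypothetical helper, not from the article; uses whichever python is on the PATH):

```shell
# check_json: report whether a file parses as JSON.
check_json() {
    py=$(command -v python3 || command -v python)   # CentOS 7 ships python 2 as "python"
    "$py" -m json.tool < "$1" > /dev/null 2>&1 && echo "OK: $1" || echo "BROKEN: $1"
}
# usage (on the master): check_json /etc/origin/master/scheduler.json
```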
enable and start origin-master service

systemctl enable origin-master
systemctl start origin-master

create kube config file,

mkdir .kube
ln -s /etc/origin/master/admin.kubeconfig .kube/config

if you can log in as system:admin, it works:
oc login -u system:admin

we do not run any pods on the master node, so there is no need to start the docker and iptables services there.
7, on all nodes except master, install node packages

yum install -y origin-node origin-pod origin-sdn-ovs origin-dockerregistry

8, on master node, generate node configuration files
create node config directory

mkdir /etc/origin/
mkdir /etc/origin/

ln -s /etc/origin/ openshift.local.config

create node config file

oc adm create-node-config --node-dir='/etc/origin/' --dns-domain='' --dns-ip='' --hostnames='' --master='' --network-plugin='redhat/openshift-ovs-multitenant' --node=''

oc adm create-node-config --node-dir='/etc/origin/' --dns-domain='' --dns-ip='' --hostnames='' --master='' --network-plugin='redhat/openshift-ovs-multitenant' --node=''

copy config file to nodes

scp /etc/origin/* [email protected]:/etc/origin/node/
scp /etc/origin/* [email protected]:/etc/origin/node/

9, set up the nodes, on all nodes except master
vim /etc/origin/node/node-config.yaml, add the following contents after the kind section

  - region=primary
  - zone=west

copy the openshift root certificate into the system trusted list by appending /etc/origin/node/ca.crt to /etc/ssl/certs/ca-bundle.crt:

cat /etc/origin/node/ca.crt >> /etc/ssl/certs/ca-bundle.crt

add the nameserver entry to /etc/resolv.conf


enable and start iptables, docker, node services

systemctl enable iptables
systemctl start iptables

systemctl enable docker
systemctl start docker

systemctl enable origin-node
systemctl start origin-node

10, pull docker images needed, on all nodes except master

docker pull openshift/origin-sti-builder      
docker pull openshift/origin-deployer          
docker pull openshift/origin-docker-registry   
docker pull openshift/origin-haproxy-router   
docker pull openshift/origin-pod 
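The five pulls can be driven from one list. The sketch below (not from the original article) only prints the commands so you can review them; remove the echo to actually pull:

```shell
# Build the docker pull commands from a list of image names; echo for review only.
cmds=$(for img in origin-sti-builder origin-deployer origin-docker-registry \
                  origin-haproxy-router origin-pod; do
    echo "docker pull openshift/${img}"
done)
echo "$cmds"
```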

11, setup registry service, on master node
create registry serviceaccount

oc create serviceaccount registry -n default

add scc to registry serviceaccount

oadm policy add-scc-to-user privileged system:serviceaccount:default:registry

create registry service

oadm registry --service-account=registry --mount-host=/opt/openshift-registry

ignore any errors that occur
create a route for the docker-registry service

oc create route passthrough --service docker-registry -n default 

get service ip and route name

oc get svc
oc get route

use the docker-registry service cluster IP and the route hostname to create a certificate

oc adm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt  --signer-key=/etc/origin/master/ca.key --signer-serial=/etc/origin/master/ca.serial.txt --hostnames="," --cert=/etc/origin/master/registry.crt --key=/etc/origin/master/registry.key

create registry-certificates secrets

oc secrets new registry-certificates /etc/origin/master/registry.crt /etc/origin/master/registry.key -n default

add these certificates to the registry and default serviceaccounts

oc secrets add registry registry-certificates -n default
oc secrets add default registry-certificates -n default

update deployment config of docker-registry to use ssl

oc env dc/docker-registry REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key -n default
oc patch dc/docker-registry --api-version=v1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","livenessProbe":{"httpGet":{"scheme":"HTTPS"}}}]}}}}'  -n default
oc patch dc/docker-registry --api-version=v1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","readinessProbe":{"httpGet":{"scheme":"HTTPS"}}}]}}}}'  -n default
oc volume dc/docker-registry --add --type=secret --secret-name=registry-certificates -m /etc/secrets -n default

12, create the router service, on master node

oc create serviceaccount router -n default
oadm policy add-scc-to-user hostnetwork system:serviceaccount:default:router
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
oadm router router --replicas=1 --service-account=router

13, on all nodes except master, open ports needed

iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 10250 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 10250 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 10255 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 80 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 443 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 4789 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m udp --dport 10255 -j ACCEPT
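The same rules can be generated from a protocol:port list, which makes it harder to miss one when you add ports later. This sketch (not from the article) prints the commands; drop the echo to apply them for real:

```shell
# Generate the OS_FIREWALL_ALLOW rules from a proto:port list; echo for review only.
rules=$(for spec in tcp:10250 udp:10250 tcp:10255 udp:10255 tcp:80 tcp:443 udp:4789; do
    proto=${spec%%:*}; port=${spec##*:}
    echo "iptables -I OS_FIREWALL_ALLOW -p ${proto} -m ${proto} --dport ${port} -j ACCEPT"
done)
echo "$rules"
```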

14, create image streams and templates, on master node
download and extract the temp.tar file, then cd v1.3/

for f in image-streams/image-streams-centos7.json; do cat $f | oc create -n openshift -f -; done
for f in db-templates/*.json; do cat $f | oc create -n openshift -f -; done
for f in quickstart-templates/*.json; do cat $f | oc create -n openshift -f -; done

15, confirm the installation is okay.
on master node,

oc get po -o wide

NAME                      READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-2-6ifmx   1/1       Running   0          16m
router-1-5u540            1/1       Running   0          33s

oc status
In project default on server (passthrough) to pod port 5000-tcp (svc/docker-registry)
  dc/docker-registry deploys 
    deployment #2 deployed 35 minutes ago - 1 pod
    deployment #1 failed 52 minutes ago: newer deployment was found running

svc/kubernetes - ports 443, 53->8053, 53->8053

svc/router - ports 80, 443, 1936
  dc/router deploys 
    deployment #1 deployed 19 minutes ago - 1 pod

View details with 'oc describe /' or list everything with 'oc get all'.

16, last, one thing to be done.
When I created the docker-registry service, I used the host path /opt/openshift-registry to store docker images. If you did the same, change the ownership of the /opt/openshift-registry directory to 1001:root on the node where the docker-registry pod runs. In my case the registry runs on node1, so on node1, run

chown 1001:root /opt/openshift-registry

if you use another storage strategy, there is no need to do this.

