OpenShift Host

OPENSHIFT HOST

Configuration of virtual machines running an OpenShift service on the KVM host. (Implementation notes, not a tutorial.)

Network Configuration

File: /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=dhcp
PEERDNS=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
DHCPV6C=no
STP=on
DEFROUTE=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
DNS1=192.168.0.103
DNS2=192.168.0.103

File: /etc/sysconfig/network-scripts/ifcfg-enp6s0

DEVICE=enp6s0
ONBOOT=yes
BRIDGE=br0

https://access.redhat.com/solutions/7412
PEERDNS=no -> ensures that the DNS value from the DHCP server is not added to /etc/resolv.conf.

reboot

File: /etc/resolv.conf

# ...
nameserver 192.168.0.103

Htop

wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
rpm -ihv epel-release-7-8.noarch.rpm
yum install htop

Date and Time

yum install -y ntp ntpdate
systemctl enable ntpd
systemctl start ntpd

OpenShift Service (Docker Container)

Documentation: https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container

The following command:

  • starts OpenShift Origin listening on all interfaces on your host (0.0.0.0:8443),
  • starts the web console listening on all interfaces at /console (0.0.0.0:8443),
  • launches an etcd server to store persistent data, and
  • launches the Kubernetes system components.
docker \
run -d \
--name "origin" \
--privileged \
--pid=host \
--net=host \
-v /:/rootfs:ro \
-v /var/run:/var/run:rw \
-v /sys:/sys \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:rslave \
-v /var/log:/var/log \
openshift/origin start
firewall-cmd \
--zone=public \
--add-port=8443/tcp \
--permanent
firewall-cmd \
--zone=public \
--add-port=7001/tcp \
--permanent
firewall-cmd \
--zone=public \
--add-port=53/tcp \
--permanent
firewall-cmd \
--zone=trusted \
--change-interface=docker0
firewall-cmd \
--reload
docker \
login \
-u <username> \
-e <any_email_address> \
-p <token_value> \
<registry_service_host:port>
docker \
exec \
-it origin \
bash
oc \
login
oc \
new-project hochguertel-biz
oc \
new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc \
project hochguertel-biz

Get Access Token:

oc \
whoami \
-t

Log:

GOicBd6Udbl-M1qY1dOk-rPJvm-tMmceI0LiWc8XXXX

NEW APPROACH

Documentation:

Creating VMs

Installation of VMs using the virt-install tool is very straightforward. This tool can run in interactive or
non-interactive mode. Let’s use virt-install in non-interactive mode to create a RHEL 7 x64 VM named
vm1 with two virtual CPUs, 2 GB of memory and 20 GB of disk space:

virt-install \
--network bridge:br0 \
--name vm1 \
--ram=2048 \
--vcpus=2 \
--disk path=/vm-images/vm1.img,size=20 \
--os-type linux \
--os-variant rhel7 \
--accelerate \
--hvm \
--noautoconsole \
--graphics vnc,listen=0.0.0.0 \
--cdrom /media/storage/installation_media/rhel-server-7.2-x86_64-dvd.iso

Enable VNC access through the firewall

firewall-cmd \
--zone=public \
--add-port=5900/tcp \
--permanent
firewall-cmd \
--reload

Clone VMs

virsh \
suspend vm1
virt-clone \
--connect qemu:///system \
--original vm1 \
--name vm1-clone \
--file /vm-images/vm1-clone.img
virsh \
resume vm1
virsh \
start vm1-clone

Delete VMs

virsh \
undefine vm1

Find out the VNC display port:

virsh \
vncdisplay vm2-clone
virsh \
dumpxml vm2-clone \
| grep vnc
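
The display number printed above maps to a TCP port as 5900 + display, i.e. ':0' is port 5900 and ':1' is port 5901. A small sketch to compute the port directly (vm2-clone as the example domain):

virsh vncdisplay vm2-clone \
| awk -F: '{ print "VNC TCP port: " 5900 + $NF }'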

Saving the state of a VM -> Clone VMs (script: virsh-save-vm)

File: /usr/local/bin/virsh-save-vm

#!/bin/bash
howto()
{
echo "
Saves the current state of a running virtual machine.
Usage : $0 [ -n <virtual-machine name> ] [ -h ]
<virtual-machine name> Selects the virtual machine that is to be saved.
... <virtual-machine name> has to be more than 1 character.
"
}
VM=
DATETIME=$(date +%d.%m.%Y-time-%H-%M)
SAVE_NAME=
if [[ "$1" == "" ]]; then
howto
exit
fi
#getopts, config
while getopts "n:h" OPTION
do
case $OPTION in
n)
VM=$OPTARG
;;
h)
howto
exit
;;
?)
howto
exit
;;
esac
done
if [[ "${VM}" == "" ]]; then
howto
exit
fi
if [[ "${#VM}" -le "1" ]]; then
howto
exit
fi
CHECK_IS_VM_PRESENT=$(virsh domid "${VM}")
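# Note: "laufend" is what 'virsh list' prints for a running domain in a German
# locale; on an English-locale system this would be "running".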
CHECK_IS_VM_RUNNING=$(virsh list | grep -w "${VM}" | grep "laufend")
if [[ "${#CHECK_IS_VM_PRESENT}" -eq "0" ]]; then
#echo "Fehler: Die Virtuelle Maschine '${VM}' wurde nicht gefunden."
exit
fi
if [[ "${CHECK_IS_VM_PRESENT}" == "-" ]]; then
echo "Fehler: Die Virtuelle Maschine '${VM}' ist nicht gestartet."
exit
fi
if [[ "${#CHECK_IS_VM_RUNNING}" -eq "0" ]]; then
echo "Fehler: Die Virtuelle Maschine '${VM}' ist nicht gestartet."
exit
fi
#---
SAVE_NAME="${VM}-save-${DATETIME}"
# vm1-clone-add-001.img
SAVE_NAME_DISK_VDB="${VM}-add-001-save-${DATETIME}"
# TODO:
# - Check for multiple disks, then create one --file row per disk.
# (https://linux.die.net/man/1/virt-clone);
# - Additional disk file names: vm1-clone-add-001.img
#
virsh suspend "${VM}"
virt-clone \
--connect qemu:///system \
--original "${VM}" \
--name "${SAVE_NAME}" \
--file "/vm-images/${SAVE_NAME}.img" \
--file "/vm-images/${SAVE_NAME_DISK_VDB}.img"
virsh resume "${VM}"
virsh list
ls -lha "/vm-images/${SAVE_NAME}.img"
ls -lha "/vm-images/${SAVE_NAME_DISK_VDB}.img"
echo " "
df -h /vm-images/
#echo " "
#virsh list --all

Usage

Saves the current state of a running virtual machine.
Usage : virsh-save-vm [ -n <virtual-machine name> ] [ -h ]
<virtual-machine name> Selects the virtual machine that is to be saved.
... <virtual-machine name> has to be more than 1 character.

Example

virsh-save-vm \
-n vm1-clone

CREATING A PRIVATE NETWORK FOR THE GUEST VMs (VMs WITH OPENSHIFT SERVICES) (openshift.hochguertel.local)

brctl \
addbr openshift-vms
brctl \
show
ip \
link \
show openshift-vms

Create a tap device named vm-vnic:

ip \
tuntap \
add \
dev vm-vnic \
mode tap
ip \
link \
show vm-vnic

Let’s add vm-vnic to openshift-vms:

brctl \
addif openshift-vms \
vm-vnic
brctl \
show
brctl \
showmacs openshift-vms

Remove vm-vnic and openshift-vms:

brctl \
delif openshift-vms \
vm-vnic
brctl \
show openshift-vms
ip \
tuntap \
del \
dev vm-vnic \
mode tap
brctl \
delbr \
openshift-vms; echo $?

Create a virtual isolated network for the OpenShift VMs:

echo \
"<network>" \
> openshift-vm-isolated.xml
echo \
"<name>openshift-vm-isolated</name>" \
>> openshift-vm-isolated.xml
echo \
"</network>" \
>> openshift-vm-isolated.xml
virsh \
net-define openshift-vm-isolated.xml
virsh \
net-list --all
virsh \
net-dumpxml openshift-vm-isolated
virsh \
net-start openshift-vm-isolated
virsh \
net-autostart openshift-vm-isolated
virsh \
net-list \
--all
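
As a side note, the three echo calls above can be replaced by a single here-document that produces the same openshift-vm-isolated.xml (a sketch, equivalent to the commands shown):

cat > openshift-vm-isolated.xml << '__EOF__'
<network>
<name>openshift-vm-isolated</name>
</network>
__EOF__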

Add a second network interface to the OpenShift VMs:

virsh \
domiflist vm1-clone
virsh \
domiflist vm2-clone
virsh \
attach-interface \
--domain vm1-clone \
--source openshift-vm-isolated \
--type network \
--model virtio \
--config \
--live
virsh \
domiflist vm1-clone
virsh \
attach-interface \
--domain vm2-clone \
--source openshift-vm-isolated \
--type network \
--model virtio \
--config \
--live
virsh \
domiflist vm2-clone
virsh \
net-dumpxml openshift-vm-isolated \
| grep "name="
brctl \
show virbr4

Set IP Address on vm1.openshift.hochguertel.local

ifconfig eth1 172.47.0.1

Set IP Address on vm2.openshift.hochguertel.local

ifconfig eth1 172.47.0.2
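
Note that an address set with ifconfig does not survive a reboot. To make it persistent, an ifcfg file along these lines should work (a sketch for vm2; the netmask value is an assumption):

File: /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.47.0.2
NETMASK=255.255.255.0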

Test the newly created isolated VM network openshift-vm-isolated

From: vm1.openshift.hochguertel.local

ping 172.47.0.2

From: vm2.openshift.hochguertel.local

ping 172.47.0.1

VIRTUAL-MACHINE (vm1-clone)

We set up the virtual machine vm1-clone as an OpenShift Origin service.

  • We register the new host vm1.openshift.hochguertel.local on the network.
  • Install the OpenShift Origin service software on the host vm1.openshift.hochguertel.local.
  • Configure the OpenShift Origin service to work with our Docker registry.
  • Finally, we show how to run a few sample applications as containers on the OpenShift service.
ssh \
-l root \
192.168.0.104

We register the new host vm1.openshift.hochguertel.local on the network.

  • Set the virtual machine’s hostname.
  • Register the hostname with the network’s DNS server.
  • Verify the DNS registration, i.e. that the DNS entries resolve correctly.

Set the hostname of the virtual machine vm1-clone.

echo "vm1.openshift.hochguertel.local" > /etc/hostname

We register the hostname with the network’s DNS server (atomic.hochguertel.local).

echo "host-record=vm1.openshift,192.168.0.104" > /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_vm1.openshift
echo "host-record=vm1.openshift.hochguertel.local,192.168.0.104" >> /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_vm1.openshift
systemctl restart enp2s0-dnsmasq.service
echo "host-record=vm1.openshift,192.168.0.104" > /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_vm1.openshift
echo "host-record=vm1.openshift.hochguertel.local,192.168.0.104" >> /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_vm1.openshift
systemctl restart docker0-dnsmasq.service
systemctl status docker0-dnsmasq.service enp2s0-dnsmasq.service

Verify the DNS registration, i.e. that the DNS entries resolve correctly.

  • From several network clients, e.g. openshift.hochguertel.local or T430-1.hochguertel.local.
ping vm1.openshift
ping vm1.openshift.hochguertel.local

We install the OpenShift Origin service.

Subscription Management

subscription-manager register
subscription-manager attach
subscription-manager refresh
yum install nano

Network Configuration

File: /etc/sysconfig/network-scripts/ifcfg-eth0

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=15192a7a-6968-44de-b35f-aad885e1ae7e
DEVICE=eth0
ONBOOT=yes
DNS1=192.168.0.103
DNS2=192.168.0.103

PEERDNS=no -> ensures that the DNS value from the DHCP server is not added to /etc/resolv.conf. (RedHat.com - Solution #7412)

reboot

File: /etc/resolv.conf

# ...
nameserver 192.168.0.103

Initial situation:

# Generated by NetworkManager
search hochguertel.local openshift.hochguertel.local
nameserver 192.168.0.1

We install the OpenShift Origin server software.

subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms"
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
yum update
yum install docker
sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker
systemctl is-active docker
systemctl enable docker && systemctl start docker && systemctl status docker -l
systemctl is-active docker
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=trusted --change-interface=docker0
firewall-cmd --reload
cd /usr/src/
wget https://github.com/openshift/origin/releases/download/v1.3.2/openshift-origin-server-v1.3.2-ac1d579-linux-64bit.tar.gz
tar xfvz openshift-origin-server-v1.3.2-ac1d579-linux-64bit.tar.gz
cd openshift-origin-server-v1.3.2-ac1d579-linux-64bit
export PATH="$(pwd)":$PATH
export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
openshift start

We register a wildcard subdomain (wildcard hostname) for the OpenShift service with the network’s DNS server atomic.hochguertel.local.

echo "address=/vm1.openshift/192.168.0.104" > /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_wildcard.vm1.openshift
echo "address=/vm1.openshift.hochguertel.local/192.168.0.104" >> /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_wildcard.vm1.openshift
systemctl restart enp2s0-dnsmasq.service; systemctl status enp2s0-dnsmasq.service
echo "address=/vm1.openshift/192.168.0.104" > /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_wildcard.vm1.openshift
echo "address=/vm1.openshift.hochguertel.local/192.168.0.104" >> /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_wildcard.vm1.openshift
systemctl restart docker0-dnsmasq.service; systemctl status docker0-dnsmasq.service
systemctl status docker0-dnsmasq.service enp2s0-dnsmasq.service
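
To verify the wildcard records, any label below the registered domain should now resolve to 192.168.0.104 (foo is just an arbitrary test label):

dig +short foo.vm1.openshift.hochguertel.local @192.168.0.103
dig +short foo.vm1.openshift @192.168.0.103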

Setting up the OpenShift Origin router

To create an admin user for management:

oadm \
policy \
add-cluster-role-to-user cluster-admin admin

oadm \
policy \
add-scc-to-user hostnetwork -z router
oadm \
router \
router \
--replicas=1 \
--service-account=router

Verify

oc \
status

Log:

[root@openshift1 ~]# oc status
In project default on server https://192.168.0.104:8443
svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053
svc/router - 172.30.32.87 ports 80, 443, 1936
dc/router deploys docker.io/openshift/origin-haproxy-router:v1.3.2
deployment #1 running for 20 seconds - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

[root@openshift1 ~]# oadm policy add-cluster-role-to-user cluster-admin admin
[root@openshift1 ~]# oadm policy add-scc-to-user hostnetwork -z router
[root@openshift1 ~]# oadm router router --replicas=1 --service-account=router
info: password for stats user admin has been set to uudBGre8uF
--> Creating router router ...
serviceaccount "router" created
clusterrolebinding "router-router-role" created
deploymentconfig "router" created
service "router" created
--> Success
[root@openshift1 ~]#
[root@openshift1 ~]# oc status
In project default on server https://openshift1.kvm.hochguertel.local:8443
svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053
svc/router - 172.30.34.251 ports 80, 443, 1936
dc/router deploys docker.io/openshift/origin-haproxy-router:v1.3.2
deployment #1 deployed 53 seconds ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@openshift1 ~]#

Creating a second hard disk (openshift.hochguertel.local)

Add a second hard disk with 50 GB of storage to the VM vm1.openshift.hochguertel.local.

Create a 50-GB non-sparse file:

dd if=/dev/zero of=/vm-images/vm1-clone-add-001.img bs=1M count=51200
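
A faster alternative on file systems that support it is fallocate, which reserves the same 50 GB without streaming zeros through dd (a sketch):

fallocate -l 50G /vm-images/vm1-clone-add-001.img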

Shutdown the VM:

virsh \
shutdown vm1-clone

Adding the second hard disk

Add an extra entry for the disk in the VM’s XML file (vm1-clone.xml) in /etc/libvirt/qemu.
You can copy & paste the entry for your main storage device and just change the target and address tags.
For example:

virsh \
edit vm1-clone

Examples:

<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

Make sure that:

  • the name of the device (i.e. vdb) follows the first one in sequential order in the address tag, -> dev=’vda’
  • use a unique slot address (check the address tag of ALL devices, not just storage devices) -> slot=’0x04’

Or:

<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vm-images/vm1-clone.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

make sure that:

  • the name of the device (i.e. vdb) follows the first one in sequential order in the address tag, -> dev=’vda’
  • use a unique slot address (check the address tag of ALL devices, not just storage devices) -> slot=’0x06’

Add:

<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1-clone-add-001.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>

make sure that:

  • the name of the device (i.e. vdb) follows the first one in sequential order in the address tag, -> dev=’vdb’
  • use a unique slot address (check the address tag of ALL devices, not just storage devices) -> slot=’0x08’

Full content of the QEMU guest VM XML file with the second hard disk added:

File: /etc/libvirt/qemu/vm1-clone.xml

<domain type='kvm'>
<name>vm1-clone</name>
<uuid>f199b841-1b6d-4ed9-9208-e758b93700c4</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Nehalem</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vm-images/vm1-clone.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1-clone-add-001.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:04:3e:1f'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</memballoon>
</devices>
</domain>

Restart the VM from the updated XML configuration file:

virsh \
create /etc/libvirt/qemu/vm1-clone.xml

Docker Storage

File: /etc/sysconfig/docker-storage-setup

DEVS=/dev/vdb
VG=docker-vg

systemctl \
stop docker
rm -rf /var/lib/docker
docker-storage-setup
systemctl \
start docker \
&& \
systemctl \
status docker
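
Afterwards the LVM objects that docker-storage-setup created on /dev/vdb can be inspected (assuming the run succeeded):

pvs
vgs docker-vg
lvs docker-vg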

Configuring OpenShift to work with our Nexus3 Docker registry.

On vm1-clone, the Docker service is configured so that it can work with our private (Nexus3) Docker registry.

  • Add the Docker registry

    ADD_REGISTRY=' --add-registry registry.hochguertel.local '
    INSECURE_REGISTRY=' --insecure-registry registry.hochguertel.local --insecure-registry internal-registry.hochguertel.local --insecure-registry 172.30.0.0/16 '
  • Restart the Docker service

    systemctl \
    restart \
    docker
  • Query the status of the Docker service

    systemctl \
    status \
    docker
  • Log in to the registry

    docker login registry.hochguertel.local
    -> admin
    -> <SECRET>
    -> the e-mail address should not be requested!
  • Test the private registry

    docker \
    search \
    registry.hochguertel.local/hochguertel-local/openshift/springboot-maven3-centos7

File: /etc/sysconfig/docker

# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
#OPTIONS='--selinux-enabled --log-driver=journald'
OPTIONS=' --log-driver=journald '
if [ -z "${DOCKER_CERT_PATH}" ]; then
DOCKER_CERT_PATH=/etc/docker
fi
# If you want to add your own registry to be used for docker search and docker
# pull use the ADD_REGISTRY option to list a set of registries, each prepended
# with --add-registry flag. The first registry added will be the first registry
# searched.
ADD_REGISTRY=' --add-registry registry.hochguertel.local --add-registry registry.access.redhat.com '
# If you want to block registries from being used, uncomment the BLOCK_REGISTRY
# option and give it a set of registries, each prepended with --block-registry
# flag. For example adding docker.io will stop users from downloading images
# from docker.io
#BLOCK_REGISTRY=' --block-registry=all '
# If you have a registry secured with https but do not have proper certs
# distributed, you can tell docker to not look for full authorization by
# adding the registry to the INSECURE_REGISTRY line and uncommenting it.
# INSECURE_REGISTRY='--insecure-registry'
INSECURE_REGISTRY=' --insecure-registry registry.hochguertel.local --insecure-registry internal-registry.hochguertel.local --insecure-registry 172.30.0.0/16 '
# On an SELinux system, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined 1
# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp
# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false
#
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below line
#DOCKERBINARY=/usr/bin/docker-latest

Setting up the internal OpenShift Docker registry

The OpenShift service uses the internal Docker registry for the build process. The artifacts produced by a build stay on the build server and can be deployed to an external registry (Nexus3 or docker.io) if needed.

find \
-iname \
"admin.kubeconfig"
find \
/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit \
-iname \
"admin.kubeconfig"
oadm \
registry \
--config=./openshift.local.config/master/admin.kubeconfig \
--service-account=registry

Log:

[root@vm1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oadm registry --config=./openshift.local.config/master/admin.kubeconfig --service-account=registry
--> Creating registry registry ...
serviceaccount "registry" created
clusterrolebinding "registry-registry-role" created
deploymentconfig "docker-registry" created
service "docker-registry" created
--> Success

Workaround: To create OpenShift applications from container images in the (external, private) Nexus3 Docker registry:

  1. Import the container image from the private registry
  2. Create the application from the imported container image

These are the steps I performed to use an image from another Docker registry (even outside my cluster).

My registry:

https://ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com:5000

I create the project (in OpenShift) to which I want to push.

oc \
new-project test

I’m inside the project, and I’ll create a secret so that my OpenShift is able to access my registry:

oc \
secrets \
new-dockercfg mysecret \
--docker-server=https://ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com:5000 \
--docker-username=testuser \
--docker-password=testpassword \
--docker-email=any@mail.com

Add secret to service accounts

oc \
secrets \
add serviceaccount/default \
secrets/mysecret \
--for=pull
oc \
secrets \
add serviceaccount/builder \
secrets/mysecret

Import the image stream

oc import-image \
--insecure ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com:5000/test/name-of-image:1 \
--confirm

Now you’re able to create an app:

oc new-app \
--insecure-registry <image-stream-name>:tag
  • A better way is to push your images to the OpenShift registry.
  • Then it isn’t necessary to create a secret and to perform the oc import.
  • You’re able to expose a registry (secure registry) so you can access the registry from outside your cluster to push images.

Starting OpenShift with the system:

File: /etc/profile.d/openshift.sh

cat > /etc/profile.d/openshift.sh << '__EOF__'
export OPENSHIFT=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit
export PATH=$OPENSHIFT:$PATH
export KUBECONFIG="$OPENSHIFT"/openshift.local.config/master/admin.kubeconfig
export CURL_CA_BUNDLE="$OPENSHIFT"/openshift.local.config/master/ca.crt
sudo chmod +r "$OPENSHIFT"/openshift.local.config/master/admin.kubeconfig
__EOF__

chmod 755 /etc/profile.d/openshift.sh

File: /usr/lib/systemd/system/openshift-origin.service

cat > /usr/lib/systemd/system/openshift-origin.service << '__EOF__'
[Unit]
Description=Origin Master Service
After=docker.service
Wants=docker.service
Requires=docker.service
[Service]
Type=notify
NotifyAccess=all
Restart=always
RestartSec=10s
ExecStart=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/openshift start
WorkingDirectory=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/
[Install]
WantedBy=multi-user.target
__EOF__
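
With the unit file in place, the service can be enabled like any other (a sketch; note the plain ASCII hyphen in the unit name):

systemctl daemon-reload
systemctl enable openshift-origin.service
systemctl start openshift-origin.service
systemctl status openshift-origin.service -l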

Finally, roll out a few sample applications on the OpenShift service.

Add Image-Stream Tag: demo-hochguertel-biz/openshift-springboot-maven3-centos7:latest

  • import docker image from external private registry
oc \
tag \
--insecure=true \
--source=docker \
registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest \
demo-hochguertel-biz/openshift-springboot-maven3-centos7:latest

Log:

[root@ostree docker-springboot-maven3-centos]# oc tag --insecure=true --source=docker registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest demo-hochguertel-biz/openshift-springboot-maven3-centos7:latest
Tag openshift-springboot-maven3-centos7:latest set to registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest.

Build the Spring Boot application as a container application using the S2I-enabled container image

s2i \
build \
git://git@gogs.hochguertel.local:hochguertelto/openshift-demo-boot-project.git \
registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest \
demo-boot-project
  • Import Docker containers from a private Docker registry into the OpenShift Docker image registry. (* is required,…)
    oc \
    secrets \
    new-dockercfg mysecret \
    --docker-server=https://registry.hochguertel.local \
    --docker-username=admin \
    --docker-password=<SECRET> \
    --docker-email=any@mail.com
    oc \
    secrets \
    add serviceaccount/default \
    secrets/mysecret \
    --for=pull
    oc \
    secrets \
    add serviceaccount/builder \
    secrets/mysecret
oc \
import-image \
--insecure=true \
registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest \
--confirm=true
  • Create the Spring Boot application as a container application, using the S2I-enabled container image, as an OpenShift project
    oc \
    new-app \
    registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest~http://gogs.hochguertel.local/hochguertelto/openshift-demo-boot-project.git
oc \
new-app \
registry.hochguertel.local/hochguertel-local/openshift-springboot-maven3-centos7:latest~http://gogs.hochguertel.local/hochguertelto/learning-spring-boot-video.git
  • Delete Build
oc \
delete \
bc openshift-demo-boot-project \
-n demo-hochguertel-biz

VIRTUAL-MACHINE (vm2-clone)

We set up the virtual machine vm2-clone as an OpenShift Origin service, using vm1-clone as a template.

  • Create a copy of vm1-clone.
  • Give the copy the new name vm2-clone.
  • Register the new host vm2.openshift.hochguertel.local on the network.
virsh-save-vm \
-n vm1-clone
virsh \
dumpxml \
vm1-clone-save-25.12.2016-time-03-39 \
> vm2-clone.xml
nano vm2-clone.xml
mv \
/vm-images/vm1-clone-save-25.12.2016-time-03-39.img \
/vm-images/vm2-clone.img
mv \
/vm-images/vm1-clone-add-001-save-25.12.2016-time-03-39.img \
/vm-images/vm2-clone-add-001.img
virsh \
undefine vm1-clone-save-25.12.2016-time-03-39
virsh \
define vm2-clone.xml
virsh \
list \
--all
virsh \
start vm2-clone
virsh \
list

Full content of my QEMU VM XML file:

Copy of vm1-clone.xml, modified and renamed to vm2-clone.xml (VM name: vm2-clone)

File: /root/vm2-clone.xml

<domain type='kvm'>
<name>vm2-clone</name>
<uuid>6ac6bb29-8e77-4787-846d-2f6148d16516</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Nehalem</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vm-images/vm2-clone.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm2-clone-add-001.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:e8:1d:b8'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'/>
<seclabel type='dynamic' model='dac' relabel='yes'/>
</domain>

Check VNC Display Port

virsh \
vncdisplay vm2-clone
virsh \
dumpxml vm2-clone \
| grep vnc

Make the VNC display port accessible through the firewall

firewall-cmd \
--zone=public \
--add-port=5901/tcp \
--permanent
firewall-cmd \
--reload

We register the new host vm2.openshift.hochguertel.local on the network and install the OpenShift Origin service.

  • Set the virtual machine’s hostname.
  • Register the hostname with the network’s DNS server.
  • Verify the DNS registration, i.e. that the DNS entries resolve correctly.

Set the hostname of the virtual machine vm2-clone.

echo "vm2.openshift.hochguertel.local" > /etc/hostname

We register the hostname with the network’s DNS server (atomic.hochguertel.local).

echo "host-record=vm2.openshift,192.168.0.108" > /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_vm2.openshift
echo "host-record=vm2.openshift.hochguertel.local,192.168.0.108" >> /root/sym/dnsmasq.d/enp2s0-dnsmasq.d/0host_vm2.openshift
systemctl restart enp2s0-dnsmasq.service
echo "host-record=vm2.openshift,192.168.0.108" > /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_vm2.openshift
echo "host-record=vm2.openshift.hochguertel.local,192.168.0.108" >> /root/sym/dnsmasq.d/docker0-dnsmasq.d/0host_vm2.openshift
systemctl restart docker0-dnsmasq.service
systemctl status docker0-dnsmasq.service enp2s0-dnsmasq.service

Verify the DNS registration, i.e. that the DNS entries resolve correctly.

  • From several network clients, e.g. openshift.hochguertel.local or T430-1.hochguertel.local.
ping vm2.openshift
ping vm2.openshift.hochguertel.local

Restarting VM2… vm2.openshift.hochguertel.local

reboot

Subscription Management vm2.openshift.hochguertel.local

Subscription Management

subscription-manager register
subscription-manager attach
subscription-manager refresh
yum install nano

NetworkManager disabled.

Disable NetworkManager; it apparently re-enabled itself after a yum update?!

Important! NetworkManager does not support bridging. NetworkManager must be disabled to use networking with the network scripts (located in the /etc/sysconfig/network-scripts/ directory).

chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start
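
On RHEL 7 the chkconfig and service commands are forwarded to systemd; the native equivalents would be:

systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network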

If you do not want to disable NetworkManager entirely, add NM_CONTROLLED=no to the ifcfg-* network script being used for the bridge.

NM_CONTROLLED= where answer is one of the following:

  • yes — NetworkManager is permitted to configure this device. This is the default behavior and can be omitted.
  • no — NetworkManager is not permitted to configure this device.

LINKDELAY=time where time is the number of seconds to wait for link negotiation before configuring the device. The default is 5 secs. Delays in link negotiation, caused by STP for example, can be overcome by increasing this value.

The bridge should now be ready for use, however there may be a delay before traffic starts to flow (typically about 30 seconds if STP is enabled or half that if not). The STP option specifies whether or not the Spanning Tree Protocol should be enabled. This is essential if there is any possibility of the bridge creating a loop in the network. It is safe in other cases, but it will increase the delay between a new link being added and it being able to pass traffic. For this reason you may want to leave STP disabled in simple cases (such as when bridging a set of virtual machines to a single physical interface).

File: /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=dhcp
PEERDNS=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
DHCPV6C=no
STP=on
DELAY=10
#LINKDELAY=30
DEFROUTE=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NM_CONTROLLED=no

  • STP=on is important: without STP (Spanning Tree Protocol) the KVM network for the guest VMs does not work, so STP must be enabled!
  • With STP on, we must ensure that the link is available - we have to wait.
  • AND DELAY is set to 10 to wait 10 seconds until this network device is available?!, otherwise br0 receives no IP via DHCP.
  • AND LINKDELAY is set to 30 to wait 30 seconds before sending the DHCP request to receive an IP, otherwise br0 receives no IP via DHCP.
  • A link-delay time (LINKDELAY) of less than 30 seconds does not work, because that is too short to get the link ready and send packets to the network.
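
To check the bridge and its STP state after these changes, something like the following can be used (an stp_state of 1 means STP is enabled):

brctl show br0
cat /sys/class/net/br0/bridge/stp_state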

Making the OpenShift machine startable via Wake-on-LAN:

  • Corresponding WOL scripts and commands have been placed on various systems.
$ WOL_OPENSHIFT
Sending magic packet to 255.255.255.255 with broadcast 255.255.255.255 MAC 00:26:18:79:B5:BE port 9

Providing the standalone OpenShift (vm1-clone) with master and node configuration.

Until now the OpenShift service used the default configuration implicitly available to it. With an externalized configuration for master and node, the OpenShift service can be configured completely.

  • In the master configuration we can set the DNS name (wildcard subdomain), so that the UI automatically fills in the subdomain to use, e.g. when creating a route.
  • This presumably also allows fixing a bug with the container registry (the manual importing of images).

File: /etc/profile.d/openshift.sh

config="openshift1.kvm.hochguertel.local.config"
export OPENSHIFT=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit
export PATH=$OPENSHIFT:$PATH
export KUBECONFIG="$OPENSHIFT"/${config}/master/admin.kubeconfig
export CURL_CA_BUNDLE="$OPENSHIFT"/${config}/master/ca.crt
sudo chmod +r "$OPENSHIFT"/${config}/master/admin.kubeconfig


File: /usr/lib/systemd/system/openshift.service

[Unit]
Description=Origin Master Service
After=docker.service
Wants=docker.service
Requires=docker.service
[Service]
Type=notify
NotifyAccess=all
Restart=always
RestartSec=10s
ExecStart=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/openshift \
start \
--master-config=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/%H.kvm.hochguertel.local.config/master/master-config.yaml \
--node-config=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/%H.kvm.hochguertel.local.config/node-openshift1/node-config.yaml
WorkingDirectory=/usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/
[Install]
WantedBy=multi-user.target

File: /etc/hostname

openshift1


File: /etc/sysconfig/network-scripts/ifcfg-eth0

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=15192a7a-6968-44de-b35f-aad885e1ae7e
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no

  • NM_CONTROLLED=no is important; otherwise the bridge doesn’t work and guest VMs receive neither IPs via DHCP nor a connection.

File: /etc/sysconfig/network

# Created by anaconda
# If the synchronization with the time server at boot time keeps failing, i.e., you find a relevant error message in the /var/log/boot.log system log, try to add the following line to /etc/sysconfig/network:
NETWORKWAIT=1

File: /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

File: /etc/nsswitch.conf

#
# /etc/nsswitch.conf
#
# An example Name Service Switch config file. This file should be
# sorted with the most-used services at the beginning.
#
# The entry '[NOTFOUND=return]' means that the search for an
# entry should stop if the search in the previous entry turned
# up nothing. Note that if the search failed due to some other reason
# (like no NIS server responding) then the search continues with the
# next entry.
#
# Valid entries include:
#
# nisplus Use NIS+ (NIS version 3)
# nis Use NIS (NIS version 2), also called YP
# dns Use DNS (Domain Name Service)
# files Use the local files
# db Use the local database (.db) files
# compat Use NIS on compat mode
# hesiod Use Hesiod for user lookups
# [NOTFOUND=return] Stop searching if not found so far
#
# To use db, put the "db" in front of "files" for entries you want to be
# looked up first in the databases
#
# Example:
#passwd: db files nisplus nis
#shadow: db files nisplus nis
#group: db files nisplus nis
passwd: files sss
shadow: files sss
group: files sss
#initgroups: files
#hosts: db files nisplus nis dns
hosts: files dns myhostname
# Example - obey only what nisplus tells us...
#services: nisplus [NOTFOUND=return] files
#networks: nisplus [NOTFOUND=return] files
#protocols: nisplus [NOTFOUND=return] files
#rpc: nisplus [NOTFOUND=return] files
#ethers: nisplus [NOTFOUND=return] files
#netmasks: nisplus [NOTFOUND=return] files
bootparams: nisplus [NOTFOUND=return] files
ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
services: files sss
netgroup: files sss
publickey: nisplus
automount: files
aliases: files nisplus

File: /etc/bash_completion.d/virsh_bash_completion

# bash completion for virsh
_contain_cmd()
{
local e f
local array1=($1) array2=($2)
for e in "${array1[@]}"
do
for f in "${array2[@]}"
do
if [[ "$e" == "$f" ]] ; then
echo $e
return
fi
done
done
echo "notfound"
return
}
_virsh_list_networks()
{
local flag_all=$1 flags
if [ "$flag_all" -eq 1 ]; then
flags="--all"
else
flags="--inactive"
fi
virsh -q net-list $flags | cut -d' ' -f2 | awk '{print $1}'
}
_virsh_list_domains()
{
local flag_all=$1 flags
if [ "$flag_all" -eq 1 ]; then
flags="--all"
else
flags="--inactive"
fi
virsh -q list $flags | cut -d' ' -f7 | awk '{print $1}'
}
_virsh_list_pools()
{
local flag_all=$1 flags
if [ "$flag_all" -eq 1 ]; then
flags="--all"
else
flags="--inactive"
fi
virsh -q pool-list $flags | cut -d' ' -f2 | awk '{print $1}'
}
_virsh_list_ifaces()
{
local flag_all=$1 flags
if [ "$flag_all" -eq 1 ]; then
flags="--all"
else
flags="--inactive"
fi
virsh -q iface-list $flags | cut -d' ' -f2 | awk '{print $1}'
}
_virsh_list_nwfilters()
{
virsh -q nwfilter-list | cut -d' ' -f4 | awk '{print $1}'
}
_virsh()
{
local cur prev cmds doms options nets pools cmds_help
local flag_all=1 array ret a b ifaces nwfilters files
# not must use bash-completion now :)
# _init_completion -s || return
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
cmds=$( echo "$(virsh -h| grep '^ ' | cut -d' ' -f5)" \
"$(virsh -h| grep '\--' | cut -d' ' -f7 | cut -d= -f1)")
cmds_help=$(virsh help| grep '^ ' | cut -d' ' -f5)
case "$prev" in
--domain)
doms=$(_virsh_list_domains "$flag_all")
COMPREPLY=( $(compgen -W "$doms" -- "$cur") )
return 0
;;
--network)
nets=$(_virsh_list_networks "$flag_all")
COMPREPLY=( $(compgen -W "$nets" -- "$cur") )
return 0
;;
--pool)
pools=$(_virsh_list_pools "$flag_all")
COMPREPLY=( $(compgen -W "$pools" -- "$cur") )
return 0
;;
--interface)
ifaces=$(_virsh_list_ifaces "$flag_all")
COMPREPLY=( $(compgen -W "$ifaces" -- "$cur") )
return 0
;;
--nwfilter)
nwfilters=$(_virsh_list_nwfilters)
COMPREPLY=( $(compgen -W "$nwfilters" -- "$cur") )
return 0
;;
--file|--xml)
files=$(ls)
COMPREPLY=( $(compgen -W "$files" -- "$cur") )
return 0
;;
esac
array=$(IFS=$'\n'; echo "${COMP_WORDS[*]}")
ret=$(_contain_cmd "$array" "$cmds_help")
if [[ "$ret" != "notfound" && "$ret" != "$cur" ]]; then
a=$(virsh help "$ret" |grep '^ --'|cut -d' ' -f5)
b=$(virsh help "$ret" |grep '^ \[--'|cut -d' ' -f5|cut -d[ -f2|cut -d] -f1)
options=$( echo $a $b )
COMPREPLY=( $(compgen -W "$options" -- "$cur") )
return 0
fi
case "$cur" in
*)
COMPREPLY=( $(compgen -W "$cmds" -- "$cur") )
return 0
;;
esac
} &&
complete -o default -F _virsh virsh

File: /root/.bashrc

# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
alias ll="ls -lh"

KVM guests (virtual machines) can no longer talk to the bridged network

On the HOST system, re-initialize the network interfaces as soon as all virtual guests are resumed!

systemctl restart network

  • The host network now works flawlessly when started via WOL_OPENSHIFT or the like.
  • The KVM VMs are available again very quickly after a restart.
  • I accept the extra re-initialization of the network interfaces, because there apparently is no better solution for now.

Generating a KVM MAC

If you are managing your guests via command line, the following script might be helpful to generate a randomized MAC using QEMU’s registered OUI (52:54:00):

MACADDR="52:54:00:$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\).*$/\1:\2:\3/')";
echo $MACADDR

If you’re paranoid about assigning an in-use MAC then check for a match in the output of “ip neigh”. However, using this random method is relatively safe,
giving you an approximately n in 16.8 million chance of a collision (where n is the number of existing QEMU/KVM guests on the LAN).
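
A minimal sketch of that check, reusing the MACADDR variable from above:

ip neigh | grep -iq "$MACADDR" && echo "MAC already in use" || echo "MAC looks free"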

OpenShift: Change the default domain name prefixed to routes

File: /usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/openshift1.kvm.hochguertel.local.config/master/master-config.yaml

...
routingConfig:
  subdomain: router.openshift1.kvm.hochguertel.local
...

File: /usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/openshift1.kvm.hochguertel.local.config/master/master-config.yaml

admissionConfig:
  pluginConfig:
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: "true"
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
      location: ""
apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionDevelopment: false
  extensionProperties: null
  extensionScripts: null
  extensionStylesheets: null
  extensions: null
  loggingPublicURL: ""
  logoutURL: ""
  masterPublicURL: https://openshift1.kvm.hochguertel.local:8443
  metricsPublicURL: ""
  publicURL: https://openshift1.kvm.hochguertel.local:8443/console/
  servingInfo:
    bindAddress: 0.0.0.0:8443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    namedCertificates: null
    requestTimeoutSeconds: 0
auditConfig:
  enabled: false
controllerConfig:
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllerLeaseTTL: 0
controllers: '*'
corsAllowedOrigins:
- 127.0.0.1
- localhost
- openshift1.kvm.hochguertel.local:8443
disabledFeatures: null
dnsConfig:
  allowRecursiveQueries: true
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://openshift1.kvm.hochguertel.local:4001
etcdConfig:
  address: openshift1.kvm.hochguertel.local:4001
  peerAddress: openshift1.kvm.hochguertel.local:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    bindNetwork: tcp4
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
    namedCertificates: null
  servingInfo:
    bindAddress: 0.0.0.0:4001
    bindNetwork: tcp4
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
    namedCertificates: null
  storageDirectory: /usr/src/openshift-origin-server-v1.3.2-ac1d579-linux-64bit/openshift.local.etcd
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
imagePolicyConfig:
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 5
  maxScheduledImageImportsPerMinute: 60
  scheduledImageImportMinimumIntervalSeconds: 900
jenkinsPipelineConfig:
  autoProvisionEnabled: false
  parameters: null
  serviceName: jenkins
  templateName: jenkins-ephemeral
  templateNamespace: openshift
kind: MasterConfig
kubeletClientInfo:
  ca: ca.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig: null
  apiLevels: null
  apiServerArguments: null
  controllerArguments: null
  disabledAPIGroupVersions: {}
  masterCount: 1
  masterIP: ""
  podEvictionTimeout: 5m
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: ""
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: null
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ""
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://openshift1.kvm.hochguertel.local:8443
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  externalIPNetworkCIDRs: null
  hostSubnetLength: 9
  ingressIPNetworkCIDR: 172.46.0.0/16
  networkPluginName: ""
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  alwaysShowProviderSelection: false
  assetPublicURL: https://openshift1.kvm.hochguertel.local:8443/console/
  grantConfig:
    method: auto
    serviceAccountMethod: prompt
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
  masterCA: ca-bundle.crt
  masterPublicURL: https://openshift1.kvm.hochguertel.local:8443
  masterURL: https://openshift1.kvm.hochguertel.local:8443
  sessionConfig:
    sessionMaxAgeSeconds: 300
    sessionName: ssn
    sessionSecretsFile: ""
  templates: null
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 300
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
  userAgentMatchingConfig:
    defaultRejectionMessage: ""
    deniedClients: null
    requiredClients: null
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: router.openshift1.kvm.hochguertel.local
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates: null
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true

KVM Spice instead of VNC for guest VM remote access

  • With VNC the keyboard language is not correctly auto-detected.
  • That’s annoying; the previously mentioned trick of flipping the keyboard language (Windows taskbar..) does not work very well either, as some characters are not available… like “pipe”…
Set nano as the default editor first, so that the virsh edit calls below open in nano instead of vi:
echo "EDITOR=nano" \
>> ~/.bashrc
echo "VISUAL=nano" \
>> ~/.bashrc
echo "export EDITOR VISUAL" \
>> ~/.bashrc
source ~/.bashrc
Install the Spice packages:
yum install spice-server spice-client spice-protocol

Note:

You must remove the VNC display and then add the Spice display. You can NOT simply change the VNC display to a Spice display, as that does not add the extra devices necessary to use Spice.

virsh \
edit openshift1
Find the existing VNC graphics section:
...
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
...
Replace it with a Spice graphics section:
<graphics type='spice' autoport='yes' listen='0.0.0.0' defaultMode='insecure'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
Find the existing video device:
...
<video>
  <model type='cirrus' vram='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
...
Replace it with a QXL video device:
<video>
  <model type='qxl' ram='65536' vram='16384' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
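
The graphics and video device changes only take effect after the domain has been fully powered off and started again (a reboot from inside the guest keeps the old devices):

virsh shutdown openshift1
virsh start openshift1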

Query the Spice display URI:

virsh domdisplay --domain openshift1

LOG

[root@kvm repo]# virsh domdisplay --domain openshift1
spice://localhost:5900
[root@kvm repo]#

It looks as if the Spice display server is not listening on 0.0.0.0, but it actually is; we can connect to our guest VM via the URI spice://kvm:5900.

  • With Spice the keyboard layout works fine. Every character is available, and no switching of the Windows keyboard language is needed.
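
To open the Spice display from a client machine, remote-viewer from the virt-viewer package can be used (a minimal sketch; the host name kvm matches the URI check above):

yum install -y virt-viewer
remote-viewer spice://kvm:5900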

Less waiting time when booting

File: /etc/sysconfig/grub

GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel_openshift/root rd.lvm.lv=rhel_openshift/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

grub2-mkconfig -o /boot/grub2/grub.cfg

LOG

[root@kvm repo]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-514.2.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.2.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-71fa280cd69540d7b5a32d7123be8f55
Found initrd image: /boot/initramfs-0-rescue-71fa280cd69540d7b5a32d7123be8f55.img
done
[root@kvm repo]#

KVM network optimization…

The domain's interface configuration before the cleanup (virsh dumpxml excerpt):
...
<interface type='bridge'>
  <mac address='52:54:00:6c:9b:f9'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='network'>
  <mac address='52:54:00:b2:04:ef'/>
  <source network='openshift-vm-isolated'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='bridge'>
  <mac address='52:54:00:1a:cf:b8'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</interface>
<interface type='network'>
  <mac address='52:54:00:f7:42:3b'/>
  <source network='bridge-default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</interface>
...
<!-- Entries completely removed.. to rebuild the network from scratch! -->

File: net_bridge-default.xml

<network>
  <name>vbridge0</name>
  <uuid>0d524877-8197-478b-bfef-a0300c18d228</uuid>
  <bridge name='vbridge0' stp='on' delay='0'/>
</network>

virsh \
net-define \
--file net_bridge-default.xml
virsh \
net-autostart vbridge0
virsh \
net-start vbridge0
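
Optionally check that the network is defined, active, and marked for autostart:

virsh net-list --all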
Attach an interface on the vbridge0 network to the domain, both persistently and live:
virsh \
attach-interface \
--domain openshift1 \
--source vbridge0 \
--type network \
--model virtio \
--config \
--live
Connect the vbridge0 network to the physical bridge br0 with a veth pair:
ip link add veth0 type veth peer name veth1
ifconfig veth0 up
ifconfig veth1 up
brctl addif br0 veth0
brctl addif vbridge0 veth1
  • Note: There are no noticeable performance bottlenecks with this solution! The suspicion arose because reposync (Red Hat repositories) at times ran at only 40 KB/s and at times at 500 KB/s again. So everything is fine!
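
A veth pair created with ip link is not persistent across reboots. A minimal sketch to recreate it at boot, assuming /etc/rc.d/rc.local is enabled on this CentOS 7 host:

cat >> /etc/rc.d/rc.local <<'EOF'
# Recreate the veth pair linking br0 and vbridge0
ip link add veth0 type veth peer name veth1
ifconfig veth0 up
ifconfig veth1 up
brctl addif br0 veth0
brctl addif vbridge0 veth1
EOF
chmod +x /etc/rc.d/rc.local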
virsh dumpxml openshift1

File: virsh dumpxml openshift1

<domain type='kvm' id='12'>
<name>openshift1</name>
<uuid>33c4f33c-d77a-4990-8f81-0b3b7b110fae</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Nehalem</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vm-images/openshift1.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/openshift1-add-001.img'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<readonly/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:c0:6f:eb'/>
<source network='vbridge0' bridge='vbridge0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-12-openshift1/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='spice' port='5900' autoport='yes' listen='0.0.0.0' defaultMode='insecure'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='qxl' ram='65536' vram='16384' vgamem='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c13,c1021</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c13,c1021</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>

OpenShift Bash Completion

cd /etc/bash_completion.d
wget https://raw.githubusercontent.com/openshift/origin/master/contrib/completions/bash/oadm
wget https://raw.githubusercontent.com/openshift/origin/master/contrib/completions/bash/oc
wget https://raw.githubusercontent.com/openshift/origin/master/contrib/completions/bash/openshift
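
New login shells pick the completions up automatically; for the current shell they can be sourced directly:

for f in oadm oc openshift; do source /etc/bash_completion.d/$f; done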

Configure OpenShift’s Internal-Registry

oadm registry --config=openshift1.kvm.hochguertel.local.config/master/admin.kubeconfig --service-account=registry

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oadm registry --config=openshift1.kvm.hochguertel.local.config/master/admin.kubeconfig --service-account=registry
--> Creating registry registry ...
serviceaccount "registry" created
clusterrolebinding "registry-registry-role" created
deploymentconfig "docker-registry" created
service "docker-registry" created
--> Success
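
A quick sanity check that the registry actually rolls out (against the default project):

oc get pods -n default
oc get svc docker-registry -n default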

The Docker registry now shows up in 'project/default/overview'.

https://github.com/openshift/origin/issues/10973

(Screenshot: OpenShift-Host5.png)
http://appagile.io/2017/03/29/how-to-push-a-docker-image-from-docker-registry-to-openshift-registry/
https://stackoverflow.com/questions/36655281/openshift-origin-run-app-against-insecure-registry-yields-stuck-pod-with-error

RedHat Reposync - Local RedHat Repositories:

echo "exclude=firefox*" >> /etc/yum.conf

File: /etc/yum.conf

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
# This is the default, if you make this bigger yum won't see if the metadata
# is newer on the remote and so you'll "gain" the bandwidth of not having to
# download the new metadata and "pay" for it by yum not having correct
# information.
# It is esp. important, to have correct metadata, for distributions like
# Fedora which don't keep old packages around. If you don't like this checking
# interupting your command line usage, it's much better to have something
# manually check the metadata once an hour (yum-updatesd will do this).
# metadata_expire=90m
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
exclude=firefox* *i686* gimp* *kde* *gnome* *xfce* *openjdk* *X11* *kate* *kdevelop* *gedit* *gstreamer* *gtk* *gutenprint* *gwenview* *isdn* *k3b* *kaccessible* *kactivities* *kamera* *kwin* *kwallet* *libreoffice* *openoffice* *libre* *mesa* *libGL* *mp3* *nautilus* *nepomuk* *widget* *icon* *theme* *okular* *pcp* *pidgin* *plexus* *plymouth* *audio* *qt* *rhythmbox* *sane* *desktop* *sip* *soprano* *sox* *telepathy* *texlive* *tigervnc* *thai* *totem* *wireshark* *wav* *xchat* *xcb* *xorg* *xulrunner*
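
The actual mirroring step, as a minimal sketch using reposync and createrepo from yum-utils; the repo ID rhel-7-server-rpms and the target path are assumptions, adjust them to the subscribed channels:

yum install -y yum-utils createrepo
reposync --repoid=rhel-7-server-rpms --download_path=/var/www/html/repos/
createrepo /var/www/html/repos/rhel-7-server-rpms/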

Fix Docker, Firewalld and NetworkManager Conflict:

https://github.com/moby/moby/issues/16137
https://unix.stackexchange.com/questions/199966/how-to-configure-centos-7-firewalld-to-allow-docker-containers-free-access-to-th
Fix is available upstream; time for `yum update`?!?
(Screenshot: OpenShift-Host7.png)
The workaround:
nmcli connection modify docker0 connection.zone trusted
systemctl stop NetworkManager.service
firewall-cmd --permanent --zone=trusted --change-interface=docker0
systemctl start NetworkManager.service
nmcli connection modify docker0 connection.zone trusted
systemctl restart docker.service

Without this, Docker containers can't access the external network: no github.com, no docker.io, not even hochguertel.local, unless docker0 is added to the trusted zone.
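
To verify the zone assignment afterwards:

firewall-cmd --get-zone-of-interface=docker0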

PUSHING APPLICATION IMAGES TO AN EXTERNAL REGISTRY

  • Tutorial: Blog.Openshift.com - Pushing application images to an external registry
  • Store the Docker registry credentials in the OpenShift service. (oc get secret)
  • Make the credentials known to the builder. (oc edit sa builder)
  • Use the credentials in a build configuration (build run). (oc edit bc time)
    • e.g. pushing the container artifact to a remote Docker registry (internal or docker.io).

Step 1: create a new project:

oc new-project ext-image-push

Step 2: set up docker credentials as a secret:

  • Alternative way, without a JSON/YAML Docker configuration file: this is the preferred way of creating secret credentials, instead of reading in ~/.docker/config.json…!!
  • Use this alternative way to create new Docker secrets: not only will the type be kubernetes.io/dockercfg, publishing also works fine to our internal private Nexus 3 registry, both with HTTPS and insecure!
The generic form:
oc \
secrets \
new-dockercfg dockerhub \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD \
--docker-email=DOCKER_EMAIL

E.g. for docker.io:

oc \
secrets \
new-dockercfg hochguertel-dockercfg-dockerio \
--docker-server=https://index.docker.io/v1/ \
--docker-username=tobiashochguertel \
--docker-password=<SECRET> \
--docker-email=tobias.hochguertel@googlemail.com

E.g. for internal-registry.hochguertel.local:

oc \
secrets \
new-dockercfg hochguertel-dockercfg-httpslocal \
--docker-server=https://internal-registry.hochguertel.local/hochguertel-local/ \
--docker-username=admin \
--docker-password=<SECRET> \
--docker-email=tobias.hochguertel@googlemail.com

Log:

[root@openshift1 ~]# oc get secret
NAME TYPE DATA AGE
builder-dockercfg-j25vp kubernetes.io/dockercfg 1 2h
builder-token-9983v kubernetes.io/service-account-token 4 2h
builder-token-dm8kl kubernetes.io/service-account-token 4 2h
default-dockercfg-wpotu kubernetes.io/dockercfg 1 2h
default-token-n7xup kubernetes.io/service-account-token 4 2h
default-token-w2c5q kubernetes.io/service-account-token 4 2h
deployer-dockercfg-0ig26 kubernetes.io/dockercfg 1 2h
deployer-token-6s4vx kubernetes.io/service-account-token 4 2h
deployer-token-tcd7u kubernetes.io/service-account-token 4 2h
hochguertel-dockercfg-dockerio kubernetes.io/dockercfg 1 2m
hochguertel-dockercfg-httpslocal kubernetes.io/dockercfg 1 2m

Step 3: add secret to your builder service account:

  • Each build in your project runs using the builder service account.

Get a list of service accounts:

oc get sa

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc get sa
NAME SECRETS AGE
builder 2 22m
default 2 22m
deployer 2 22m

Edit the builder service account:
oc edit sa builder

File: /tmp/oc-edit-fze0u.yaml

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: builder-dockercfg-j25vp
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-06-05T05:47:59Z
  name: builder
  namespace: ext-image-push
  resourceVersion: "9843"
  selfLink: /api/v1/namespaces/ext-image-push/serviceaccounts/builder
  uid: 881bbe83-49b2-11e7-976a-525400c06feb
secrets:
- name: builder-token-dm8kl
- name: builder-dockercfg-j25vp

Add the previously created secrets…

secrets:
...
- name: hochguertel-dockercfg-dockerio
- name: hochguertel-dockercfg-httpslocal
...

File: /tmp/oc-edit-fze0u.yaml (modified)

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: builder-dockercfg-j25vp
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-06-05T05:47:59Z
  name: builder
  namespace: ext-image-push
  resourceVersion: "9843"
  selfLink: /api/v1/namespaces/ext-image-push/serviceaccounts/builder
  uid: 881bbe83-49b2-11e7-976a-525400c06feb
secrets:
- name: builder-token-dm8kl
- name: builder-dockercfg-j25vp
- name: hochguertel-dockercfg-dockerio
- name: hochguertel-dockercfg-httpslocal

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc edit sa builder
serviceaccount "builder" edited

Step 4: create a new build in your project:

  • I am using a Dockerfile in a git repository as an example here. Since I am pushing to DockerHub in this example, I am using a non-RHEL base image (busybox).
oc new-build https://github.com/VeerMuchandi/time --context-dir=busybox

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc new-build https://github.com/VeerMuchandi/time --context-dir=busybox
--> Found Docker image c75bebc (2 weeks old) from Docker Hub for "busybox"
* An image stream will be created as "busybox:latest" that will track the source image
* A Docker build using source code from https://github.com/VeerMuchandi/time will be created
* The resulting image will be pushed to image stream "time:latest"
* Every time "busybox:latest" changes a new build will be triggered
--> Creating resources with label build=time ...
imagestream "busybox" created
imagestream "time" created
buildconfig "time" created
--> Success
Build configuration "time" created and build triggered.
Run 'oc logs -f bc/time' to stream the build progress.

What happens:

  • Kubernetes creates an ImageStream for busybox within this project and downloads the busybox container image (from DockerHub in this case).
  • It also creates an ImageStream time to push the resulting application image to, once the build is complete.
  • We also get a BuildConfiguration named time that specifies how the build is done.

Get all buildconfigs in your project:

  • Note: bc is the short form for buildconfig.
    oc get bc

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc get bc
NAME TYPE FROM LATEST
time Docker Git 1

Step 5: edit buildconfig to push to your docker registry

oc edit bc time

File: /tmp/oc-edit-6sm5w.yaml

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewBuild
  creationTimestamp: 2017-06-05T06:19:36Z
  labels:
    build: time
  name: time
  namespace: ext-image-push
  resourceVersion: "10229"
  selfLink: /oapi/v1/namespaces/ext-image-push/buildconfigs/time
  uid: f2a3a2c3-49b6-11e7-976a-525400c06feb
spec:
  output:
    to:
      kind: ImageStreamTag
      name: time:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    contextDir: busybox
    git:
      uri: https://github.com/VeerMuchandi/time
    type: Git
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: busybox:latest
    type: Docker
  triggers:
  - github:
      secret: xm2RnmxPVklO6N98wO7B
    type: GitHub
  - generic:
      secret: iqniCG_5u6csu9baMw5P
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558
    type: ImageChange
status:
  lastVersion: 1

Locate the following lines in your editor:

spec:
  output:
    to:
      kind: ImageStreamTag
      name: time:latest

  • Note: this tells the builder to push the output to the ImageStream time with the ImageStreamTag time:latest.

Change it to push to your external docker registry internal-registry.hochguertel.local:

spec:
  output:
    to:
      kind: DockerImage
      name: internal-registry.hochguertel.local/hochguertel-local/mytime:latest
    pushSecret:
      name: hochguertel-dockercfg-httpslocal

  • Note: we also have to add the pushSecret, to tell the builder which secret to use when pushing to this registry.

Or change it to push to the docker.io registry:

spec:
  output:
    to:
      kind: DockerImage
      name: docker.io/veermuchandi/mytime:latest
    pushSecret:
      name: dockerhub

  • Note: we also have to add the pushSecret, to tell the builder which secret to use when pushing to this registry.

For internal-registry.hochguertel.local:

File: /tmp/oc-edit-6sm5w.yaml (modified)

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewBuild
  creationTimestamp: 2017-06-05T06:19:36Z
  labels:
    build: time
  name: time
  namespace: ext-image-push
  resourceVersion: "10229"
  selfLink: /oapi/v1/namespaces/ext-image-push/buildconfigs/time
  uid: f2a3a2c3-49b6-11e7-976a-525400c06feb
spec:
  output:
    to:
      kind: DockerImage
      name: internal-registry.hochguertel.local/hochguertel-local/mytime:latest
    pushSecret:
      name: hochguertel-dockercfg-httpslocal
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    contextDir: busybox
    git:
      uri: https://github.com/VeerMuchandi/time
    type: Git
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: busybox:latest
    type: Docker
  triggers:
  - github:
      secret: xm2RnmxPVklO6N98wO7B
    type: GitHub
  - generic:
      secret: iqniCG_5u6csu9baMw5P
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558
    type: ImageChange
status:
  lastVersion: 1

For hub.docker.io:

File: /tmp/oc-edit-6sm5w.yaml (modified)

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewBuild
  creationTimestamp: 2017-06-05T06:19:36Z
  labels:
    build: time
  name: time
  namespace: ext-image-push
  resourceVersion: "10229"
  selfLink: /oapi/v1/namespaces/ext-image-push/buildconfigs/time
  uid: f2a3a2c3-49b6-11e7-976a-525400c06feb
spec:
  output:
    to:
      kind: DockerImage
      name: docker.io/hochguertel/mytime:latest
    pushSecret:
      name: hochguertel-dockercfg-dockerio
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    contextDir: busybox
    git:
      uri: https://github.com/VeerMuchandi/time
    type: Git
  strategy:
    dockerStrategy:
      from:
        kind: DockerImage
        name: busybox:latest
    type: Docker
  triggers:
  - github:
      secret: xm2RnmxPVklO6N98wO7B
    type: GitHub
  - generic:
      secret: iqniCG_5u6csu9baMw5P
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558
    type: ImageChange
status:
  lastVersion: 1

Step 6: run the build:

oc start-build time
  • This invokes a Docker build within OpenShift and pushes the resulting image to the external Docker registry.
  • This starts a new build named time-2 and spins up a pod named time-2-build.

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc start-build time
build "time-2" started

Show a list of running pods:

oc get pods

Log:

[root@openshift1 openshift-origin-server-v1.3.2-ac1d579-linux-64bit]# oc get pods
NAME READY STATUS RESTARTS AGE
time-1-build 0/1 Error 0 24m
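
  • Note: the time-1-build pod in Error state is presumably left over from the first attempt, before the push secret was configured; the freshly started build runs in the pod time-2-build.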

Show the build log:

oc logs time-2-build -f

Works!!

[root@openshift1 ~]# oc logs time-29-build -f
Cloning "https://github.com/VeerMuchandi/time" ...
Commit: 39a16e6b59ad96b0a15714e54d6eaebbaa1fb164 (Update init.sh)
Author: VeerMuchandi <veer.muchandi@gmail.com>
Date: Wed Sep 14 20:45:02 2016 -0400
Checking for Docker config file for PULL_DOCKERCFG_PATH in path /var/run/secrets/openshift.io/pull
Using Docker config file /var/run/secrets/openshift.io/pull/.dockercfg
Step 1 : FROM busybox
---> c75bebcdd211
Step 2 : MAINTAINER Veer Muchandi veer@redhat.com
---> Using cache
---> 9834e1d284e9
Step 3 : ADD ./init.sh ./
---> Using cache
---> 103abd209f2a
Step 4 : EXPOSE 8080
---> Using cache
---> 3b73736fca7e
Step 5 : CMD ./init.sh
---> Using cache
---> 06e0c5e04da6
Step 6 : ENV "OPENSHIFT_BUILD_NAME" "time-29" "OPENSHIFT_BUILD_NAMESPACE" "ext-image-push" "OPENSHIFT_BUILD_SOURCE" "https://github.com/VeerMuchandi/time" "OPENSHIFT_BUILD_COMMIT" "39a16e6b59ad96b0a15714e54d6eaebbaa1fb164"
---> Running in b11dc4099c44
---> bb13ddd4414f
Removing intermediate container b11dc4099c44
Step 7 : LABEL "io.openshift.build.commit.ref" "master" "io.openshift.build.commit.message" "Update init.sh" "io.openshift.build.source-location" "https://github.com/VeerMuchandi/time" "io.openshift.build.source-context-dir" "busybox" "io.openshift.build.commit.author" "VeerMuchandi \u003cveer.muchandi@gmail.com\u003e" "io.openshift.build.commit.date" "Wed Sep 14 20:45:02 2016 -0400" "io.openshift.build.commit.id" "39a16e6b59ad96b0a15714e54d6eaebbaa1fb164"
---> Running in 2b7201e0da89
---> 4516242fe53a
Removing intermediate container 2b7201e0da89
Successfully built 4516242fe53a
Pushing image internal-registry.hochguertel.local/hochguertel-local/mytime:latest ...
Pushed 0/2 layers, 16% complete
Pushed 1/2 layers, 100% complete
Pushed 2/2 layers, 100% complete
Push successful

Step 7: test the built container artifact:

docker run -p 8080:8080 -d registry.hochguertel.local/hochguertel-local/mytime:latest
curl http://localhost:8080

Log:

[root@openshift1 ~]# docker run -p 8080:8080 -d registry.hochguertel.local/hochguertel-local/mytime:latest
dead6136b726ad128aade1e8dd7f405a67d0e0234336664c64e5df3d03f07c2f
[root@openshift1 ~]# curl http://localhost:8080
Mon Jun 5 11:47:58 UTC 2017

  • http://localhost:8080

Some Notes and Hints:

Expose the internal registry via a route, so it is reachable from outside the cluster:
oc expose service docker-registry -n default
Docker Machine on Windows, with insecure-registry!
https://docs.openshift.com/container-platform/3.3/dev_guide/managing_images.html#insecure-registries