Category Archives: Linux

Deploying Kubernetes Cluster on CentOS 7

It is painful to deploy a Kubernetes cluster in mainland China. The installation requires access to Google servers, which is not easily available to everyone. Fortunately, there are mirrors and alternative ways. I'll use Docker v1.13 and Kubernetes v1.11 in this article.

1. Install Docker

CentOS SCL should be enabled first.
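A minimal sketch of the install, assuming Docker 1.13 comes from the CentOS extras repository (package names may differ on your system):

```shell
# Enable Software Collections (SCL)
sudo yum install -y centos-release-scl
# Docker 1.13 ships as the "docker" package in the CentOS 7 extras repo
sudo yum install -y docker
sudo systemctl enable --now docker
docker version
```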

2. Install Kubernetes

2.1 Add the Aliyun mirror for Kubernetes packages

2.2 Precheck the OS environment

Run the init command specifying the version, so access to Google servers is avoided. The script also advises you to turn off firewalld, swap, and SELinux, and to enable some kernel parameters:

Open /etc/sysconfig/selinux, change enforcing to permissive.
Create /etc/sysctl.d/k8s.conf with content:

Remember to comment out swap volumes from /etc/fstab.
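Taken together, the prechecks above can be sketched as follows (these are the standard kubeadm prerequisites; the sysctl keys are the ones kubeadm's preflight checks complain about):

```shell
# Put SELinux into permissive mode now and on future boots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux

# Kernel parameters required for pod networking
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Turn off firewalld and swap
sudo systemctl disable --now firewalld
sudo swapoff -a   # and comment out swap volumes in /etc/fstab
```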

2.3 Pull Kubernetes images

Pull the Kubernetes images from the docker/docker-cn mirror maintained by anjia0532. These are the minimal images required for a Kubernetes master installation.

These version numbers come from the output of the kubeadm init command when you cannot access Google servers. The images should be retagged with their gcr.io names before the next steps, or kubeadm would not find them:
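A sketch of the pull-and-retag loop. The image versions listed are the ones kubeadm v1.11 asks for; the `anjia0532/google-containers.*` naming is an assumption based on that mirror's scheme, so check the mirror's repository list for the exact names:

```shell
# Versions reported by `kubeadm init` for Kubernetes v1.11
images=(
  kube-apiserver-amd64:v1.11.0
  kube-controller-manager-amd64:v1.11.0
  kube-scheduler-amd64:v1.11.0
  kube-proxy-amd64:v1.11.0
  pause:3.1
  etcd-amd64:3.2.18
  coredns:1.1.3
)
for img in "${images[@]}"; do
  docker pull "anjia0532/google-containers.${img}"
  # Retag to the gcr.io name kubeadm expects, then drop the mirror tag
  docker tag  "anjia0532/google-containers.${img}" "k8s.gcr.io/${img}"
  docker rmi  "anjia0532/google-containers.${img}"
done
```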

Now the output of docker images looks like:

The KUBE_REPO_PREFIX environment variable and friends can also be used to customize the image prefix, but I have not had time to verify them.

2.4 Start the Kubernetes master

Run the init script again and it will succeed, printing further guidelines:

Run the mkdir/cp/chown commands to enable kubectl usage. Then add the weave pod network. This may take some time, since images are being pulled.
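These are the post-init steps as printed by `kubeadm init`, plus the standard weave one-liner:

```shell
# Enable kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Add the weave pod network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```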

Now the master setup is finished; verify that it reports the Ready status:

2.5 Start the Kubernetes node (slave)

A Kubernetes node only requires the kube-proxy-amd64 and pause images; pull them:

Weave images can also be prefetched:
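A sketch of the node-side prefetch; as above, the mirror naming is an assumption, and the weave tag shown is an example (match it to the weave version your master deployed):

```shell
# On the node machine: minimal images for a worker
docker pull anjia0532/google-containers.kube-proxy-amd64:v1.11.0
docker pull anjia0532/google-containers.pause:3.1
docker tag anjia0532/google-containers.kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
docker tag anjia0532/google-containers.pause:3.1 k8s.gcr.io/pause:3.1

# Weave images come straight from Docker Hub (tag is an example)
docker pull weaveworks/weave-kube:2.4.0
docker pull weaveworks/weave-npc:2.4.0
```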

Join the node to the Kubernetes master by running the command line from the kubeadm init output:
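The join command has this shape; the address, token, and hash below are placeholders — use the exact line your own `kubeadm init` printed:

```shell
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```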

3. Verify Kubernetes cluster status

Verify nodes with:

Verify internal pods with:

If the status of a pod is not Running, get the detailed info from:
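The three checks above can be sketched as (pod name and namespace are examples):

```shell
# Verify nodes
kubectl get nodes
# Verify internal pods
kubectl get pods --all-namespaces
# Detailed info for a pod that is not Running
kubectl describe pod <pod-name> -n kube-system
kubectl logs <pod-name> -n kube-system
```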

If something goes wrong, and you cannot restore from it, simply reset the master/node:
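Resetting tears down everything kubeadm set up on that machine:

```shell
# Run on the master or node you want to wipe
sudo kubeadm reset
```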

4. Install Kubernetes Dashboard

By default, all user pods are scheduled on Kubernetes nodes (slaves). Pull the dashboard image in advance on the node machine:

Install with the alternative setup, since the recommended setup is not so friendly in a development environment:
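A sketch of the alternative install; the URL follows the dashboard project's v1.x repository layout at the time, so the exact path and version are assumptions — check the dashboard release notes for the current one:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/alternative/kubernetes-dashboard.yaml
```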

Refer here for remote access:

Change type: ClusterIP to type: NodePort and save the file. Next, check which port the Dashboard was exposed on.
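The edit and the port check look like this:

```shell
# Opens the service definition in $EDITOR; change "type: ClusterIP" to "type: NodePort"
kubectl -n kube-system edit service kubernetes-dashboard
# The NodePort (e.g. 31023) shows up in the PORT(S) column
kubectl -n kube-system get service kubernetes-dashboard
```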

Now, you can access with: http://<master-ip>:31023/.
You can grant full admin privileges to the Dashboard's Service Account in a development environment for convenience:
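One way to do this, assuming the dashboard's service account is named kubernetes-dashboard in the kube-system namespace (the default in the v1.x manifests):

```shell
# Development only: never do this on a production cluster
kubectl create clusterrolebinding kubernetes-dashboard \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:kubernetes-dashboard
```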

5. Troubleshooting

In my office environment, errors occurred and the coredns pods were always in CrashLoopBackOff status:

I Googled a lot, read answers on Stack Overflow and GitHub, and reset iptables/docker/kubernetes, but still failed to solve it. There ARE unresolved issues like #60315. So I tried switching to the flannel network instead of weave. First, Kubernetes and weave need to be reset:
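A sketch of the teardown; the weave script location is an assumption (fetch it if the `weave` command is not already on the machine):

```shell
sudo kubeadm reset

# Clean up weave's bridge and iptables rules
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod +x /usr/local/bin/weave
sudo weave reset

# Remove leftover CNI configuration so flannel starts clean
sudo rm -rf /etc/cni/net.d
```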

This time, initialize kubeadm and network with:
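A sketch of the flannel-based bringup; flannel expects the 10.244.0.0/16 pod CIDR by default, and the manifest URL follows the flannel repository layout at the time:

```shell
sudo kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```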

The flannel image can be pulled first:
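For example (the tag is the flannel version current at the time and is an assumption — match it to the one referenced in kube-flannel.yml):

```shell
docker pull quay.io/coreos/flannel:v0.10.0-amd64
```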

Everything works. Also referred here.

Mount Samba Share as a Non-root User

I used to access Windows shared folders directly in Nautilus, or mount them like:

The problem is that the mounted files can then be accessed only by root. The solution is to add a simple uid option like:
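Both variants side by side; the server, share, and credentials are placeholders, and uid/gid should be your own user's (see `id`):

```shell
# Root-only access:
sudo mount -t cifs //server/share /mnt/share -o username=winuser

# Files owned by the given non-root user:
sudo mount -t cifs //server/share /mnt/share -o username=winuser,uid=1000,gid=1000
```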

See: http://wiki.centos.org/TipsAndTricks/WindowsShares

Updated June 1, 2015:

You may encounter a 121 error like:

It's a Windows-side issue; set the following registry value to 3. This value tells Windows to prioritize file sharing over reducing memory usage.
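Per the linked article, the value lives under the LanmanServer parameters; from an elevated prompt on the Windows machine:

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f
```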

Reboot (or just restart the “Server” service in services.msc). Your problem should now be solved.

See: https://boinst.wordpress.com/2012/03/20/mount-cifs-cannot-allocate-memory-mounting-windows-share/

Switching to the Linuxmint 17.1 Theme

Just upgraded to Linuxmint 17.1. Themes in the distribution have been greatly improved. They've done a better job than Ubuntu, so I switched to the Mint theme.
mint17-3

No more visual glitches in eclipse. And it seems the new themes include a fix for the black background color of tooltips. See the eclipse FAQ here.

You can compare with the previous screenshot: Configuring Ubuntu Themes in Linuxmint 17. The only fix I want to apply is to make the theme look brighter. First, go to /usr/share/themes/Mint-X-Aqua. For gtk3 applications, patch with:

For gtk2 applications, patch with:
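I no longer have the original patch at hand; as a sketch, brightening usually means raising the theme's background color. The color names and values below are hypothetical examples — look up the actual definitions in the theme files first:

```shell
cd /usr/share/themes/Mint-X-Aqua
# gtk3: raise the theme background color (old/new values are hypothetical)
sudo sed -i 's/@define-color theme_bg_color #d6d6d6/@define-color theme_bg_color #e8e8e8/' gtk-3.0/gtk.css
# gtk2: the same change in the gtkrc color scheme
sudo sed -i 's/bg_color:#d6d6d6/bg_color:#e8e8e8/' gtk-2.0/gtkrc
```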

Configuring Ubuntu Themes in Linuxmint 17

Finally switched from Ubuntu to Linuxmint after the 14.04 release. The distribution installed so many packages that I would never use, and the Unity desktop rendered slowly on my old Thinkpad 🙁

After trying Linuxmint 17 in VirtualBox, I found the colors of its default theme not so good. The mixture of grey and light green sometimes makes it a bit hard to detect borders. It also made me uncomfortable when using eclipse:

mint17-1

So I managed to reuse the default theme of Ubuntu within the cinnamon desktop from Linuxmint:

mint17-2

Here’s what I did:
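On Ubuntu 14.04-era systems the Ambiance/Radiance themes and the Ubuntu icons live in these packages (names assumed from that release):

```shell
# light-themes provides Ambiance and Radiance; ubuntu-mono provides the icon themes
sudo apt-get install light-themes ubuntu-mono
```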

This installs the Ubuntu themes. Now edit the theme to add support for Nemo:

Add one line to the end of the file:

Create the new nemo.css file:

Replace all “nautilus” with “nemo”, “Nautilus” with “Nemo”:
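The three steps above can be sketched as follows; the file layout (gtk-3.0/gtk.css importing a per-app stylesheet from apps/nautilus.css) is an assumption based on how the Ambiance theme was organized:

```shell
cd /usr/share/themes/Ambiance/gtk-3.0
# Add one line to the end of gtk.css:
echo '@import url("nemo.css");' | sudo tee -a gtk.css
# Create nemo.css from the nautilus stylesheet, replacing every occurrence
sudo sh -c "sed 's/nautilus/nemo/g; s/Nautilus/Nemo/g' apps/nautilus.css > nemo.css"
```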

Updated Aug 14: Alternating row colors in Nemo are not available. It seems to be a bug (LP #945430) in the Ubuntu theme.

Now open your “Themes” configuration, go to “Other settings” tab. Set “Controls” to “Ambiance”, set “Icons” to “ubuntu-mono-dark”, set “Window borders” to “Ambiance”.

Optimizing Kernel Build Time

Continuing from Updating Kernel in Lucid, this time I want to decrease the overall build time. My benchmark was run in Ubuntu 10.04 installed in VirtualBox. My CPU is an i5-2540M at 2.6GHz.

I'm learning kernel code these days, and a minimal kernel saves a lot of build time. As you can see, it took 64 minutes to build 2772 modules when using the oldconfig target:

Target                             Build Time   Modules Built   Package Size
oldconfig                          64 min       2772            33 MB
localmodconfig                     16 min       244             7 MB
localmodconfig + ccache (1st run)  19 min       244             7 MB
localmodconfig + ccache (2nd run)  7 min        244             7 MB

Fortunately, a new build target, localmodconfig, was added in kernel 2.6.32 that helps here:

It runs "lsmod" to find all the modules loaded on the current running system. It then reads all the Makefiles to map which CONFIG option enables each module, and reads the Kconfig files to find the dependencies and selects that may be needed to support a CONFIG. Finally, it reads the .config file and removes any module "=m" that is not needed to enable the currently loaded modules. With this tool, you can strip a distro .config of all the unused drivers that are not needed on your machine, and the kernel will take much less time to build.

The build time dropped dramatically to 16 minutes for only 244 modules. The kernel could still boot my VM to the desktop, and everything worked fine. However, it failed to mount an *.iso file — I think because the needed module was not loaded (and thus not in lsmod) at build time. To use the localmodconfig target, run:
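In the kernel source tree, with the distro .config in place and the hardware you care about in use (so its modules are loaded):

```shell
# `yes ''` accepts the default answer for any new config prompts
yes '' | make localmodconfig
```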

It may end with errors; ignore them, as a new .config file has already been generated. Then remember to turn off the CONFIG_DEBUG_KERNEL option in the .config file, as mentioned in my previous article.

Then ccache was brought in. I downloaded the source code and built it myself, since the 3.x versions seem to be faster than the 2.4.x versions:

The default prefix (/usr/local) is used here. The last 2 lines create symbolic links (named after the compiler) to ccache, letting ccache masquerade as the compiler. This is suggested in ccache's man page.
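A sketch of that build; the 3.1.x version number is an example from that era:

```shell
# Build ccache 3.x from source and install to the default /usr/local prefix
tar xf ccache-3.1.9.tar.bz2 && cd ccache-3.1.9
./configure && make && sudo make install

# Masquerade as the compiler via symlinks, as ccache(1) suggests
sudo ln -s /usr/local/bin/ccache /usr/local/bin/gcc
sudo ln -s /usr/local/bin/ccache /usr/local/bin/cc
```

Make sure /usr/local/bin precedes the real compiler's directory in PATH, so the symlinks are picked up first.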

So why bother with a compiler cache? Doesn't the makefile's dependency tracking already avoid recompilation?

If you ever run “make clean; make” then you can probably benefit from ccache. It is very common for developers to do a clean build of a project for a whole host of reasons, and this throws away all the information from your previous compiles. By using ccache you can get exactly the same effect as “make clean; make” but much faster. Compiler output is kept in $HOME/.ccache, by default.

The first run creates the cache, and the second benefits from the cache. That’s it.

To display ccache statistics, run:
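```shell
ccache -s   # prints cache size and hit/miss statistics
```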