Archive for the ‘Linux’ Category

Mount Samba Share as a Non-root User

April 9th, 2015

I used to access Windows shared folders directly in Nautilus, or mount them like:

# mount -t cifs -o username=<your_username>,password=<your_password> //<your_server>/<your_share> /mnt/<your_local>

The problem is that the mounted files can then be accessed only by root. The solution is to add a simple uid option:

# sudo mount -t cifs -o uid=<your_uid>,username=<your_username>,password=<your_password>,domain=<your_domain> //<your_server>/<your_share> /mnt/<your_local> -vvv

See: http://wiki.centos.org/TipsAndTricks/WindowsShares
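Note that putting the password on the command line leaks it into your shell history and ps output. A safer variant, sketched here with the same placeholders, uses the credentials= option documented in mount.cifs(8). Create a ~/.smbcredentials file containing:

username=<your_username>
password=<your_password>
domain=<your_domain>

Then:

# chmod 600 ~/.smbcredentials
# sudo mount -t cifs -o uid=<your_uid>,credentials=/home/<your_user>/.smbcredentials //<your_server>/<your_share> /mnt/<your_local>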

Updated June 1, 2015:

You may encounter a 121 error like:

mount error(121): Remote I/O error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

It's a Windows-side issue; set the following registry value to 3. This value tells Windows to prioritize file sharing over reducing memory usage.

HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size
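If you'd rather not click through regedit, the same change should be possible from an elevated command prompt on the Windows machine, using the standard reg tool (a sketch, untested by me):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f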

Reboot (or just restart the “Server” service in services.msc). Your problem should now be solved.

See: https://boinst.wordpress.com/2012/03/20/mount-cifs-cannot-allocate-memory-mounting-windows-share/

Categories: Linux

Switching to the Linuxmint 17.1 Theme

December 2nd, 2014

Just upgraded to Linuxmint 17.1. The themes in this release are greatly improved; they've done a better job than Ubuntu, so I switched to the Mint theme.
[Screenshot: mint17-3]

No more broken visual glitches in Eclipse. And the new themes seem to include a fix for the black background color of tooltips issue; see the Eclipse FAQ.

You can compare with the previous screenshot in Configuring Ubuntu Themes in Linuxmint 17. The only fix I want to apply is to make the theme look brighter. First, go to /usr/share/themes/Mint-X-Aqua. For gtk3 applications, patch with:

--- gtk-3.0/gtk-main.css.bak	2014-12-02 14:06:03.864745990 +0800
+++ gtk-3.0/gtk-main.css	2014-12-02 14:21:32.508879444 +0800
@@ -1,8 +1,8 @@
 /* Default Color Scheme */
 
-@define-color theme_bg_color #d6d6d6;
+@define-color theme_bg_color #e3e3e3;
 @define-color theme_fg_color #212121;
-@define-color theme_base_color #f7f7f7;
+@define-color theme_base_color #fafafa;
 @define-color theme_text_color #212121;
 @define-color theme_selected_bg_color #6cabcd;
 @define-color theme_selected_fg_color #f5f5f5;

For gtk2 applications, patch with:

--- gtk-2.0/gtkrc.bak	2014-12-02 14:22:07.798517093 +0800
+++ gtk-2.0/gtkrc	2014-12-02 14:22:26.575901978 +0800
@@ -1,6 +1,6 @@
 # These are the defined colors for the theme, you can change them in GNOME's appearance preferences.
 
-gtk_color_scheme = "bg_color:#d6d6d6\nselected_bg_color:#6cabcd\nbase_color:#F7F7F7" # Background, base.
+gtk_color_scheme = "bg_color:#e3e3e3\nselected_bg_color:#6cabcd\nbase_color:#fafafa" # Background, base.
 gtk_color_scheme = "fg_color:#212121\nselected_fg_color:#f5f5f5\ntext_color:#212121" # Foreground, text.
 gtk_color_scheme = "tooltip_bg_color:#fbeaa0\ntooltip_fg_color:#212121" # Tooltips.
 gtk_color_scheme = "link_color:#08c" # Hyperlinks
Categories: Linux

Configuring Ubuntu Themes in Linuxmint 17

August 12th, 2014

Finally switched from Ubuntu to Linuxmint after the 14.04 release. Ubuntu installed so many packages that I would never use, and the Unity desktop rendered slowly on my old ThinkPad :(

After trying Linuxmint 17 in VirtualBox, I found the colors of its default theme not so good. The mixture of grey and light green sometimes made it a bit hard to detect borders, and it felt uncomfortable when using Eclipse:

[Screenshot: mint17-1]

So I managed to reuse the default Ubuntu theme within Linuxmint's Cinnamon desktop:

[Screenshot: mint17-2]

Here’s what I did:

# sudo apt-get install light-themes

This installs the Ubuntu themes. Now edit the theme to add support for Nemo:

# cd /usr/share/themes/Ambiance/gtk-3.0/
# sudo vi gtk-main.css

Add one line to the end of the file:

@import url("apps/nemo.css");

Create the new nemo.css file:

# sudo cp apps/nautilus.css apps/nemo.css
# sudo vi apps/nemo.css

Replace all “nautilus” with “nemo”, “Nautilus” with “Nemo”:

:%s/nautilus/nemo/g
:%s/Nautilus/Nemo/g

Updated Aug 14: The alternating row color in Nemo is not available. It seems to be a bug (LP#945430) in the Ubuntu theme.

Now open your "Themes" configuration and go to the "Other settings" tab. Set "Controls" to "Ambiance", "Icons" to "ubuntu-mono-dark", and "Window borders" to "Ambiance".

Categories: Linux

Optimizing Kernel Build Time

October 28th, 2013

Continuing from Updating Kernel in Lucid, this time I want to decrease the overall build time. My benchmark runs in Ubuntu 10.04 installed in VirtualBox; my CPU is an i5-2540M at 2.6GHz.

I'm learning kernel code these days, and a minimal kernel saves a lot of build time. As you can see, it took 64min to build 2772 modules when using the oldconfig target:

                                   Build Time   Build Modules   Package Size
oldconfig                          64min        2772            33MB
localmodconfig                     16min        244             7MB
localmodconfig + ccache, 1st run   19min        244             7MB
localmodconfig + ccache, 2nd run   7min         244             7MB

Fortunately, a new build target, localmodconfig, was added in kernel 2.6.32, and it is exactly what helps here:

It runs “lsmod” to find all the modules loaded on the current running system. It will read all the Makefiles to map which CONFIG enables a module. It will read the Kconfig files to find the dependencies and selects that may be needed to support a CONFIG. Finally, it reads the .config file and removes any module “=m” that is not needed to enable the currently loaded modules. With this tool, you can strip a distro .config of all the unuseful drivers that are not needed in our machine, and it will take much less time to build the kernel.

The build time dropped dramatically to 16min, building only 244 modules. The resulting kernel could still boot my VM to the desktop, and everything worked fine. However, it failed to mount an *.iso file, presumably because the corresponding module was not loaded (and thus not listed by lsmod) when the config was generated. To use the localmodconfig target, run:

# yes '' | make localmodconfig

It may end with errors; just ignore them, the new .config file has already been generated. Then remember to turn off the CONFIG_DEBUG_KERNEL option in the .config file, as mentioned in my previous article.

Next, ccache is used. I downloaded the source code and built it myself, since the 3.x version seems to be faster than the 2.4.x version:

# tar xzvf ccache-3.1.9.tar.gz
# cd ccache-3.1.9/
# ./configure
# make
# sudo make install
# sudo ln -s /usr/local/bin/ccache /usr/local/bin/gcc
# sudo ln -s /usr/local/bin/ccache /usr/local/bin/cc

The default prefix (/usr/local) is used here. The last two lines create symbolic links to ccache, named after the compilers, letting ccache masquerade as the compiler; this works because /usr/local/bin precedes /usr/bin in the default PATH. It is suggested in ccache's man page.
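Before kicking off a build, a quick sanity check that the masquerade works, plus zeroing the statistics so the numbers reported later are clean:

# which gcc
/usr/local/bin/gcc
# ccache -z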

So why bother with a compiler cache? Doesn't the makefile already avoid unnecessary recompilation?

If you ever run “make clean; make” then you can probably benefit from ccache. It is very common for developers to do a clean build of a project for a whole host of reasons, and this throws away all the information from your previous compiles. By using ccache you can get exactly the same effect as “make clean; make” but much faster. Compiler output is kept in $HOME/.ccache, by default.

The first run creates the cache, and the second benefits from the cache. That’s it.

To display ccache statistics, run:

# ccache -s
cache directory                     /home/gonwan/.ccache
cache hit (direct)                  2232
cache hit (preprocessed)              14
cache miss                          2305
called for link                       49
called for preprocessing            1875
compile failed                         1
preprocessor error                     1
bad compiler arguments                 1
unsupported source language         3652
autoconf compile/link                 22
no input file                       4205
files in cache                      6874
cache size                          83.8 Mbytes
max cache size                       1.0 Gbytes
Categories: Linux

BIOS Boot Sequence

October 17th, 2013

First, from Intel's manual, Volume 3A, Section 9.1.4:

The first instruction that is fetched and executed following a hardware reset is located at physical address FFFFFFF0H. This address is 16 bytes below the processor’s uppermost physical address. The EPROM containing the software-initialization code must be located at this address.

The address FFFFFFF0H is beyond the 1-MByte addressable range of the processor while in real-address mode. The processor is initialized to this starting address as follows. The CS register has two parts: the visible segment selector part and the hidden base address part. In real-address mode, the base address is normally formed by shifting the 16-bit segment selector value 4 bits to the left to produce a 20-bit base address. However, during a hardware reset, the segment selector in the CS register is loaded with F000H and the base address is loaded with FFFF0000H. The starting address is thus formed by adding the base address to the value in the EIP register (that is, FFFF0000 + FFF0H = FFFFFFF0H).

The first time the CS register is loaded with a new value after a hardware reset, the processor will follow the normal rule for address translation in real-address mode(that is, [CS base address = CS segment selector * 16]). To insure that the base address in the CS register remains unchanged until the EPROM based software-initialization code is completed, the code must not contain a far jump or far call or allow an interrupt to occur (which would cause the CS selector value to be changed).

Below are two screenshots showing the instructions at addresses FFFFFFF0H and FFFF0H (shadow BIOS, see below) and their jumps. The first shows an AMI BIOS, the second a Phoenix BIOS. The high BIOS of AMI jumps directly to the shadowed copy, and both the high and shadowed copies jump to the same address, while the high BIOS of Phoenix just keeps running at high addresses. In both BIOSes, the first instruction after all the jumps is FAh, i.e. cli (disable interrupts). I'm not going to do more reverse engineering. :)
[Screenshot: biosboot_ami]
[Screenshot: biosboot_phoenix]

NOTE: Main memory is not initialized yet at this time. From here:

The motherboard ensures that the instruction at the reset vector is a jump to the memory location mapped to the BIOS entry point. This jump implicitly clears the hidden base address present at power up. All of these memory locations have the right contents needed by the CPU thanks to the memory map kept by the chipset. They are all mapped to flash memory containing the BIOS since at this point the RAM modules have random crap in them.

The reset vector is simply FFFFFFF0h. Now, POST is started as described here:

POST stands for Power On Self Test. It's a series of individual functions or routines that perform various initialization and tests of the computer's hardware. The BIOS starts with a series of tests of the motherboard hardware: the CPU, math coprocessor, timer ICs, DMA controllers, and IRQ controllers. The order in which these tests are performed varies from motherboard to motherboard.

Next, the BIOS will look for the presence of video ROM between memory locations C000:000h and C780:000h. If a video BIOS is found, its contents will be tested with a checksum test. If this test is successful, the BIOS will initialize the video adapter. It will pass control to the video BIOS, which will in turn initialize itself and then return control once it's complete. At this point, you should see things like a logo from the video card manufacturer, a video card description, or the video card BIOS information.

Next, the BIOS will scan memory from C800:000h to DF800:000h in 2KB increments, searching for any other ROMs that might be installed in the computer, such as network adapter cards or SCSI adapter cards. If an adapter ROM is found, its contents are tested with a checksum test. If the tests pass, the card is initialized. Control will be passed to each ROM for initialization, and the system BIOS will resume control after each BIOS found is done initializing. If these tests fail, you should see an error message displayed telling you "XXXX ROM Error", where XXXX indicates the segment address of the faulty ROM.

Next, the BIOS will begin checking memory at 0000:0472h. This address contains a flag which tells the BIOS whether the system is booting from a cold boot or a warm boot. A value of 1234h at this address indicates a warm boot. This signature value appears in Intel little-endian format, that is, the least significant byte comes first, so it appears in memory as the sequence 3412. In the event of a warm boot, the BIOS will skip the remaining POST routines. If a cold start is indicated, the remaining POST routines will be run.

NOTE: Main memory is initialized during POST. The main part of the memory initialization code is complicated, and is provided directly by Intel; it is known as the MRC (Memory Reference Code).

There’s one step in POST called BIOS Shadowing:

Shadowing refers to the technique of copying BIOS code from slow ROM chips into faster RAM chips during boot-up so that any access to BIOS routines will be faster. DOS and other operating systems may access BIOS routines frequently. System performance is greatly improved if the BIOS is accessed from RAM rather than from a slower ROM chip.

A DRAM control register, PAM0 (Programmable Attribute Map), makes it possible to independently redirect reads and writes in the BIOS ROM area to main memory. The idea is to allow RAM shadowing: read accesses to the ROM range come from main memory, while writes continue to go to the ROMs. Refer to Intel's MCH datasheet for details:

This register controls the read, write, and shadowing attributes of the BIOS area from 0F0000h–0FFFFFh. The (G)MCH allows programmable memory attributes on 13 Legacy memory segments of various sizes in the 768 KB to 1 MB address range. Seven Programmable Attribute Map (PAM) Registers are used to support these features. Cacheability of these areas is controlled via the MTRR registers in the processor.

Big real mode (or unreal mode) is used to address memory beyond 1M, as BIOS ROMs become larger and larger. In big real mode, one or more data segment registers are loaded with 32-bit base addresses and limits, while the code segment stays unchanged:

                                 Real Mode   Big Real Mode   Protected Mode
Code segment (cs)                1M          1M              4G
Data segments (ds, es, fs, gs)   1M          4G              4G

Protected mode can also address 4G of memory. But since BIOS code is mainly written for real mode, big real mode is the better choice for addressing.

Then, the BIOS continues to find a bootable device; see Wikipedia:

The BIOS selects candidate boot devices using information collected by POST and configuration information from EEPROM, CMOS RAM or, in the earliest PCs, DIP switches. Option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM. The BIOS checks each device in order to see if it is bootable. For a disk drive or a device that logically emulates a disk drive, such as a USB Flash drive or perhaps a tape drive, to perform this check the BIOS attempts to load the first sector (boot sector) from the disk to address 7C00 hexadecimal, and checks for the boot sector signature 0x55 0xAA in the last two bytes of the sector. If the sector cannot be read (due to a missing or blank disk, or due to a hardware failure), or if the sector does not end with the boot signature, the BIOS considers the disk unbootable and proceeds to check the next device. Another device such as a network adapter attempts booting by a procedure that is defined by its option ROM (or the equivalent integrated into the motherboard BIOS ROM). The BIOS proceeds to test each device sequentially until a bootable device is found, at which time the BIOS transfers control to the loaded sector with a jump instruction to its first byte at address 7C00 hexadecimal (1 KiB below the 32 KiB mark).
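As an aside, you can inspect this signature on any bootable disk yourself; the device name below is just an assumption, adjust it to your system. The output should end along these lines:

# sudo xxd -s 510 -l 2 /dev/sda
000001fe: 55aa                                     U.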

After all of the above, BIOS initialization is finished. It's your turn to take control of the system from address 0000:7c00!

Why this address? It was defined by neither Intel nor Microsoft; it was decided by the IBM PC 5150 BIOS developer team (David Bradley). See here:

BIOS developer team decided 0x7C00 because:

– They wanted to leave as much room as possible for the OS to load itself within the 32KB.
– 8086/8088 used 0x0 – 0x3FF for interrupts vector, and BIOS data area was after it.
– The boot sector was 512 bytes, and the stack/data area for the boot program needed another 512 bytes.
– So, 0x7C00, the last 1024B of 32KB was chosen.

Categories: Linux

Updating 3.0 Kernel and Official Nvidia Driver on Ubuntu Lucid

March 8th, 2012

Ubuntu Lucid (10.04) originally ships with the 2.6.32 kernel. But on my ThinkPad T420, the wireless card is not recognized and the graphics card does not work well. So I switched to the 2.6.38 backport kernel, and installed the bumblebee package to utilize the Nvidia Optimus technology. Now the 3.0.0-16 backport kernel is out; it contains the fix for "rework ASPM disable code", and should do a better job of power saving even when using the discrete Nvidia card. Moreover, it's the new LTS kernel, so I decided to update to the 3.0 kernel. Follow the steps below if you are interested:

1. Add X-Updates PPA

# sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
# sudo apt-get update
# sudo apt-get install nvidia-current

These commands install the official Nvidia driver, currently version 295.20.

2. Enable Nvidia Driver

# sudo update-alternatives --config gl_conf

This lets you choose between OpenGL implementations; select nvidia over mesa. It also enables the Nvidia Xorg driver, blacklists the nouveau driver, and adds nvidia-xconfig to /usr/bin. You may see warnings like:

update-alternatives: warning: skip creation of /usr/lib32/vdpau/libvdpau_nvidia.so.1 because associated file /usr/lib32/nvidia-current/vdpau/libvdpau_nvidia.so.1 (of link group gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libvdpau_nvidia.so because associated file /usr/lib32/nvidia-current/vdpau/libvdpau_nvidia.so (of link group gl_conf) doesn't exist.

Just ignore them; it seems to be safe.

# sudo nvidia-xconfig

This generates a new /etc/X11/xorg.conf file for your Nvidia card. If you cannot find the command, its original location is /usr/lib/nvidia-current/bin/nvidia-xconfig.

3. Fix ld Bindings

# echo "/usr/lib/nvidia-current/tls" | sudo tee -a /etc/ld.so.conf.d/GL.conf > /dev/null

This just adds an ld path to /etc/ld.so.conf.d/GL.conf; otherwise, the glx module cannot be loaded correctly. Here are the relevant /var/log/Xorg.0.log segments:

(II) LoadModule: "glx"
(II) Loading /usr/lib/xorg/extra-modules/libglx.so
dlopen: libnvidia-tls.so.295.20: cannot open shared object file: No such file or directory
(EE) Failed to load /usr/lib/xorg/extra-modules/libglx.so
(II) UnloadModule: "glx"
(EE) Failed to load module "glx" (loader failed, 7)

Now, update ld runtime bindings and reboot.

# sudo ldconfig
# sudo reboot
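After the reboot, a quick check that the kernel module actually loaded, before verifying OpenGL itself:

# lsmod | grep nvidia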

4. Verify

# sudo apt-get install mesa-utils
# glxinfo | grep -i opengl

If your installation is successful, the output looks like:

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVS 4200M/PCIe/SSE2
OpenGL version string: 4.2.0 NVIDIA 295.20
OpenGL shading language version string: 4.20 NVIDIA via Cg compiler
OpenGL extensions:

After installing the driver, hedgewars runs at 120fps, where it used to show 4fps. A great improvement. :)

[Screenshot: hedgewars]

Categories: Linux

Learning Bash Scripts (3)

October 4th, 2011

This post covers loop usage in the bash shell. NOTE: read the inline comments carefully :)

1. for loop

#!/bin/bash

# loop list, last value remains
for test in Alabama Alaska Arizona Arkansas California Colorado
do
    echo The next state is $test
done
echo "The last state we visited was $test"
test=Connecticut
echo "Wait, now we're visiting $test"

# using escape or quote
for test in I don\'t know if "this'll" work
do
    echo "word: $test"
done

# loop variable & files
states="Alabama Alaska Arizona Arkansas Colorado Connecticut Delaware Florida Georgia"
statesfile=states.txt
for state in $states; do
    echo $state >> $statesfile
done
for state in `cat $statesfile`; do
    echo "Visit beautiful $state"
done
rm $statesfile

# loop directory
for file in ~/.b*; do
    if [ -d "$file" ]; then
        echo "$file is a directory"
    elif [ -f "$file" ]; then
        echo "$file is a file"
    else
        echo "$file doesn't exist"
    fi
done

# c-style syntax
for (( i = 1; i <= 10; i++ )); do
    echo "The next number is $i"
done

# use IFS (internal field separator) to split strings
IFSHOLD=$IFS
IFS=$'\n'
for entry in `cat /etc/passwd`; do
    echo "Values in $entry:"
    IFS=:
    for value in $entry; do
        echo "  $value"
    done
done
IFS=$IFSHOLD

2. while loop

#!/bin/bash

var1=10
while [ $var1 -gt 0 ]; do
    echo $var1
    var1=$[ $var1 - 1 ]
done

3. until loop

#!/bin/bash

var1=100
until [ $var1 -eq 0 ]; do
    echo $var1
    var1=$[ $var1 - 25 ]
done

4. break & continue

#!/bin/bash

# break
for (( a = 1; a < 4; a++ )); do
    echo "Outer loop: $a"
    for (( b = 1; b < 100; b++ )); do
        if [ $b -eq 5 ]; then
            break
        fi
        echo "Inner loop: $b"
    done
done

# break outer loop
for (( a = 1; a < 4; a++ )); do
    echo "Outer loop: $a"
    for (( b = 1; b < 100; b++ )); do
        if [ $b -eq 5 ]; then
            break 2
        fi
        echo "Inner loop: $b"
    done
done

# continue outer loop
for (( a = 1; a <= 5; a++ )); do
    echo "Iteration $a:"
    for (( b = 1; b < 3; b++ )); do
        if [ $a -gt 2 ] && [ $a -lt 4 ]; then
            continue 2
        fi
        var3=$[ $a * $b ]
        echo "  The result of $a * $b is $var3"
    done
done

There may be times when you're in an inner loop but need to stop the outer loop. The break command accepts a single command line parameter: break n, where n indicates the level of the loop to break out of. By default, n is 1, meaning break out of the current loop. If you set n to 2, the break command stops the next level up, the outer loop.

5. redirect & pipe

Finally, you can either pipe or redirect the output of a loop within your shell script.

#!/bin/bash

testfile=testloop.txt
for (( a = 1; a < 10; a++ )); do
    echo "The number is $a"
done > $testfile
echo "The command is finished."
cat $testfile
rm $testfile
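The pipe form works the same way; here is a trivial sketch that reverses the output order by piping the loop into sort:

#!/bin/bash

for (( a = 1; a < 10; a++ )); do
    echo "The number is $a"
done | sort -r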
Categories: Linux

Learning Bash Scripts (2)

October 4th, 2011

1. Comments

When creating a shell script file, you must specify the shell you are using in the first line of the file. The format for this is:

#!/bin/bash
# This script displays the date and who's logged on
date
who

In a normal shell script line, the pound sign (#) is used to mark a comment. A comment line in a shell script isn't processed by the shell. However, the first line of a shell script file is a special case: the pound sign followed by the exclamation point tells the shell what shell to run the script under (yes, you can be using a bash shell and run your script using another shell).

2. Display

The echo command can display a simple text string if you add the string following the command.

#!/bin/bash
# basic usage
echo This is a test.
echo "Let's see if this'll work"
# environment variables
echo "User info for user: $USER"
echo UID: $UID
echo HOME: $HOME
echo "The cost of the item is \$15"
# user variables
days=10
guest="Katie"
echo "$guest checked in $days days ago"
days=5
guest="Jessica"
echo "$guest checked in $days days ago"
# backtick
testing=`date`
echo "The date and time are: " $testing

The echo command uses either double or single quotes to delineate text strings. If you use them within your string, you need to use one type of quote within the text and the other type to delineate the string.

Notice that the environment variables in the echo commands are replaced by their current values when the script is run. Also notice that we were able to place the $USER system variable within the double quotation marks in the first string, and the shell script was still able to figure out what we meant.

You may also see variables referenced using the format ${variable}. The extra braces around the variable name are often used to help identify the variable name from the dollar sign.
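The braces matter when the variable name runs into adjacent text; a quick example:

days=10
echo "It's the ${days}th day"    # without braces, the shell would look for a variable named daysth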

User variables can be any text string of up to 20 letters, digits, or an underscore character. User variables are case sensitive, so the variable Var1 is different from the variable var1. This little rule often gets novice script programmers in trouble.

Values are assigned to user variables using an equal sign. No spaces can appear between the variable, the equal sign, and the value (another trouble spot for novices). Here are a few examples of assigning values to user variables.

The shell script automatically determines the data type used for the variable value. Variables defined within the shell script maintain their values throughout the life of the shell script but are deleted when the shell script completes.

Just like system variables, user variables can be referenced using the dollar sign. It’s important to remember that when referencing a variable value you use the dollar sign, but when referencing the variable to assign a value to it, you do not use the dollar sign.

The backtick allows you to assign the output of a shell command to a variable.

3. Redirect I/O

>: output redirect
>>: output redirect append data
<: input redirect
<<: inline input redirect

# wc << EOF
> test string 1
> test string 2
> test string 3
> EOF
    3    9    42
# 

The inline input redirection symbol is the double less-than symbol (<<). Besides this symbol, you must specify a text marker that delineates the beginning and end of the data used for input. You can use any string value for the text marker, but it must be the same at the beginning of the data and the end of the data.

4. Math Expression

#!/bin/bash
var1=10
var2=3
var3=`expr $var1 \* $var2`
var4=$[$var1 * $var2]
var5=`expr $var1 / $var2`
var6=$[$var1 / $var2]
var7=`echo "scale=3; $var1 / $var2" | bc`
echo The result is $var3
echo The result is $var4
echo The result is $var5
echo The result is $var6
echo The result is $var7

The expr command allows the processing of equations from the command line. Note that the spaces around the operator are necessary. The escape character (backslash) is used to protect any characters that may be misinterpreted by the shell before being passed to the expr command.

Bash also provides a much easier way of performing mathematical equations. In bash, when assigning a mathematical value to a variable, you can enclose the mathematical equation using a dollar sign and square brackets ($[ operation ]).

The bash shell mathematical operators support only integer arithmetic. For floating point, the most popular solution is the bc command-line calculator.

5. Structured Commands

5.1 if/else

The bash shell if statement runs the command defined on the if line. If the exit status of the command is zero (the command completed successfully), the commands listed under the then section are executed. If the exit status of the command is anything else, the then commands aren’t executed, and the bash shell moves on to the next command in the script.

#!/bin/bash
user=gonwan
user2=test2
user3=test3
# if-then
if grep $user /etc/passwd; then
    echo "The bash files for user $user are:"
    ls -a /home/$user/.b*
fi
# if-then-else
if grep $user2 /etc/passwd; then
    echo "The bash files for user $user2 are:"
    ls -a /home/$user2/.b*
else
    echo "The user name $user2 does not exist on this system"
fi
#if-then-elif-then-else
if grep $user3 /etc/passwd; then
    echo "The bash files for user $user3 are:"
    ls -a /home/$user3/.b*
elif grep $user2 /etc/passwd; then
    echo "The bash files for user $user2 are:"
    ls -a /home/$user2/.b*
else
    echo "The user name $user2 and $user3 does not exist on this system"
fi

5.2 test

The test command provides a way to test different conditions in an if-then statement. If the condition listed in the test command evaluates to true, the test command exits with a zero exit status code, making the if-then statement behave in much the same way that if-then statements work in other programming languages. If the condition is false, the test command exits with a 1, which causes the if-then statement to fail.

*) Numeric Comparisons
Comparison Description
n1 -eq n2 Check if n1 is equal to n2.
n1 -ge n2 Check if n1 is greater than or equal to n2.
n1 -gt n2 Check if n1 is greater than n2.
n1 -le n2 Check if n1 is less than or equal to n2.
n1 -lt n2 Check if n1 is less than n2.
n1 -ne n2 Check if n1 is not equal to n2.
#!/bin/bash
val1=10
val2=11
if [ $val1 -gt $val2 ]; then
    echo "$val1 is greater than $val2"
else
    echo "$val1 is less than $val2"
fi
if (( $val1 > $val2 )); then
    echo "$val1 is greater than $val2"
else
    echo "$val1 is less than $val2"
fi

However, the test command can't handle floating-point values.
You may also notice the use of double parentheses. They provide advanced mathematical expressions for comparisons, and no escaping is needed inside them:

Symbol  Description
val++   Post-increment
val--   Post-decrement
++val   Pre-increment
--val   Pre-decrement
!       Logical negation
~       Bitwise negation
**      Exponentiation
<<      Left bitwise shift
>>      Right bitwise shift
&       Bitwise Boolean AND
|       Bitwise Boolean OR
&&      Logical AND
||      Logical OR
*) String Comparisons
Comparison Description
str1 = str2 Check if str1 is the same as string str2.
str1 != str2 Check if str1 is not the same as str2.
str1 < str2 Check if str1 is less than str2.
str1 > str2 Check if str1 is greater than str2.
-n str1 Check if str1 has a length greater than zero.
-z str1 Check if str1 has a length of zero.

Trying to determine if one string is less than or greater than another is where things start getting tricky. There are two problems that often plague shell programmers when trying to use the greater-than or less-than features of the test command:
– The greater-than and less-than symbols must be escaped, or the shell will use them as redirection symbols, with the string values as filenames.
– The greater-than and less-than order is not the same as that used with the sort command.

#!/bin/bash
val1=ben
val2=mike
if [ $val1 \> $val2 ]; then
    echo "$val1 is greater than $val2"
else
    echo "$val1 is less than $val2"
fi
if [[ $val1 > $val2 ]]; then
    echo "$val1 is greater than $val2"
else
    echo "$val1 is less than $val2"
fi

The double-bracketed expression uses the standard string comparison from the test command. However, it provides an additional feature that the test command doesn't: pattern matching. And no escaping is needed anymore.

Capitalized letters are treated as less than lowercase letters in the test command. However, when you put the same strings in a file and use the sort command, the lowercase letters appear first. This is due to the ordering technique each command uses. The test command uses standard ASCII ordering, using each character’s ASCII numeric value to determine the sort order. The sort command uses the sorting order defined for the system locale language settings. For the English language, the locale settings specify that lowercase letters appear before uppercase letters in sorted order.

The BashFAQ, though, says: "As of bash 4.1, string comparisons using < or > respect the current locale when done in [[, but not in [ or test. In fact, [ and test have never used locale collating order even though past man pages said they did. Bash versions prior to 4.1 do not use locale collating order for [[ either." So you get opposite results when running on CentOS 5.7 (bash 3.2) and Ubuntu 10.04 (bash 4.1) with the [[ operator. And bash 4.1 is now consistent with the sort command.

5.3 case

Well, this is easy, just walk through the snippet:

#!/bin/bash
case $USER in
gonwan | barbara)
    echo "Welcome, $USER"
    echo "Please enjoy your visit"
    ;;
testing)
    echo "Special testing account"
    ;;
jessica)
    echo "Do not forget to log off when you're done"
    ;;
*)
    echo "Sorry, you are not allowed here"
    ;;
esac

All sample code is tested under CentOS 5.7 and Ubuntu 10.04.

Categories: Linux

Learning Bash Scripts (1)

August 28th, 2011

In this first post of the series, some basic concepts are introduced. All information is from Linux Command Line and Shell Scripting Bible, Second Edition.

1. Shell Types

There are three ways of starting a bash shell:
– As a default login shell at login time
– As an interactive shell that is not the login shell
– As a non-interactive shell to run a script

Login Shell

When you log in to the Linux system, the bash shell starts as a login shell. The login shell looks for four different startup files to process commands from. The following is the order in which the bash shell processes the files:

/etc/profile
$HOME/.bash_profile
$HOME/.bash_login
$HOME/.profile

Interactive Shell

If you start a bash shell without logging into a system (such as if you just type bash at a CLI prompt), you start what’s called an interactive shell. The interactive shell doesn’t act like the login shell, but it still provides a CLI prompt for you to enter commands.

If bash is started as an interactive shell, it doesn’t process the /etc/profile file. Instead, it checks for the .bashrc file in the user’s HOME directory.

Non-interactive Shell

Finally, the last type of shell is a non-interactive shell. This is the shell that the system starts to execute a shell script. This is different in that there isn’t a CLI prompt to worry about. However, there may still be specific startup commands you want to run each time you start a script on your system.

To accommodate that situation, the bash shell provides the BASH_ENV environment variable. When the shell starts a non-interactive shell process, it checks this environment variable for the name of a startup file to execute. If one is present, the shell executes the commands in the file.

2. Terminfo Database

The terminfo database is a set of files that identify the characteristics of various terminals that can be used on the Linux system. The Linux system stores the terminfo data for each terminal type as a separate file in the terminfo database directory. The location of this directory often varies from distribution to distribution. Some common locations are /usr/share/terminfo, /etc/terminfo, and /lib/terminfo.

Since the terminfo database files are binary, you cannot see the codes within these files. However, you can use the infocmp command to convert the binary entries into text.

The Linux shell uses the TERM environment variable to define which terminal emulation setting in the terminfo database to use for a specific session. When the TERM environment variable is set to vt100, the shell knows to use the control codes associated with the vt100 terminfo database entry for sending control codes to the terminal emulator.
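A quick way to see this in your own session (output varies by terminal): echo the variable, then dump its terminfo entry in text form with infocmp.

# echo $TERM
# infocmp $TERM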

3. Virtual Consoles

With modern Linux systems, when the Linux system starts it automatically creates several virtual consoles. A virtual console is a terminal session that runs in memory on the Linux system. Instead of having several dumb terminals connected to the PC, most Linux distributions start seven (or sometimes even more) virtual consoles that you can access from the single PC keyboard and monitor.

In most Linux distributions, you can access the virtual consoles using a simple keystroke combination. Usually you must hold down the Ctrl+Alt key combination, and then press a function key (F1 through F8) for the virtual console you want to use. Function key F1 produces virtual console 1, key F2 produces virtual console 2, and so on.

4. Environment Variables

There are two types of environment variables in the bash shell:
– Global variables
– Local variables

Global environment variables are visible from the shell session and from any child processes that the shell spawns. Local variables are only available in the shell that creates them. This makes global environment variables useful in applications that spawn child processes requiring information from the parent process.

Get

To view the global environment variables, use the printenv command.
To display the value of an individual environment variable, use the echo command. When referencing an environment variable, you must place a dollar sign($) before the environment variable name.

Unfortunately there isn’t a command that displays only local environment variables. The set command displays all of the environment variables set for a specific process. However, this also includes the global environment variables.

Set

You can assign either a numeric or a string value to an environment variable by assigning the variable to a value using the equal sign(=). It’s extremely important that there are no spaces between the environment variable name, the equal sign, and the value. If you put any spaces in the assignment, the bash shell interprets the value as a separate command.

The method used to create a global environment variable is to create a local environment variable and then export it to the global environment.

Of course, if you can create a new environment variable, it makes sense that you can also remove an existing one. You can do this with the unset command. When referencing the environment variable in the unset command, remember not to use the dollar sign.

NOTE: When dealing with global environment variables, things get a little tricky. If you’re in a child process and unset a global environment variable, it applies only to the child process. The global environment variable is still available in the parent process.
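A minimal session demonstrating all of the above, with a child bash standing in for any spawned process:

# my_var="some value"
# bash -c 'echo "child sees: $my_var"'
child sees:
# export my_var
# bash -c 'echo "child sees: $my_var"'
child sees: some value
# unset my_var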

Categories: Linux

Installing CentOS 5.x with Just the First CD

July 30th, 2011

Since the DVD size of CentOS has grown substantially (1.7G for 3.x, 2.3G for 4.x, 4.0G for 5.x), I decided to use the CD approach. I downloaded the first CD image from one of its mirror sites: http://mirrors.163.com/centos/5.6/isos/i386/.

Now, follow the official FAQ here:

– You can do a minimal install that just requires the first CD by performing the following two steps during the installation:
** During the category/task selection, deselect all package categories, and choose the “Customize now” option at the bottom of screen.
** During the customized package selection, deselect everything ( including the Base group ).
– There are reports that more than CD 1 is required in the following case:
** If you use some software raid options (this will also require CD 2 and 5)
** If you use encrypted filesystems
– When the anaconda installer notes that additional disks will be required but you desire a one CD install, the quick answer is one or more of the following approaches:
** Trim back and do a minimal install. Then once the install is up and running, pull in more packages with yum and add more options later.
– If you want to avoid using more than one CD but want to install more than just the minimal set of packages, you could also consider doing a network installation. A network installation ISO (called boot.iso) is available from the 5/os/<arch>/images/ directory on CentOS mirrors.
– This latter mode of installation, however, is only really reliable via a LAN (an Intranet installation) and not via the Internet.

In my practice, you MUST follow the deselection order; otherwise, it will still require other CDs. The actual installation lasts about 1 minute (installation of the *.rpm files). After reboot, the system gives you a minimal installation with only text-mode support. Now log in with your root account, and make sure your network is ready. Additional components can then be installed manually using yum:

# yum groupinstall "Base" "X Window System" "GNOME Desktop Environment"

NOTE: All group names are case-sensitive.

Actually, if only "X Window System" is passed to yum, you get a simple GUI with an xterm and an xclock after running the startx command.

You may want to grab a coffee during the process; for me, about 350M of content was downloaded. Reboot when finished, and add the "single" option in the GRUB menu to enter single-user mode.

Since the first CD does not install the GUI components, the runlevel is set to 3 (text mode) by default after installation. We should switch it to 5 (GUI mode) by editing the /etc/inittab file. Find the following line and change the middle value from 3 to 5:

id:3:initdefault:
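Or, as a one-liner (run as root; it assumes the line matches exactly):

# sed -i 's/^id:3:initdefault:/id:5:initdefault:/' /etc/inittab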

Now, we want the "firstboot" configuration utility to run, to simplify user account creation and other initial configuration. Check the /etc/sysconfig/firstboot file, and make sure the value is set to "YES":

RUN_FIRSTBOOT=YES

If the value is "NO", the "firstboot" utility is skipped and GDM is displayed directly. When all of this is done, issue the "exit" command to return to the normal startup process. This time, the "firstboot" wizard should show.

Here is the GDM screenshot after all above steps:

[Screenshot: centos5_gdm]

PS:

In 6.x, CentOS provides LiveCD and LiveDVD images that can also be used for installation. In 5.x, they can only be used for a trial experience.

In 4.x/3.x, the OpenOffice suite is outdated; I suggest not installing it. I also suggest removing redundant kernels:

# For 4.x
# yum remove kernel-smp kernel-smp-devel kernel-hugemem-devel
# For 3.x
# rpm -e kernel-smp

There's a 4.9 release but no 4.9 *.iso images. The readme.txt says:

– The upstream provider did not respin media for the 4.9 release and therefore the CentOS project will also not respin our install media.
– Installs moving forward will be off the 4.8 media and an upgrade will move you from version 4.8 to version 4.9.
– We do this to maintain compatibility with 3rd party kernel drivers which are designed to be installed as part of the installation process.

Run “yum update” to update from 4.8 to 4.9. For me, about 300M contents were downloaded.

In the 3.x release, I suggest selecting the "Kernel Development" group during installation. The 2.4.x kernel needs its source to compile kernel modules (like virtual machine addons).

Categories: Linux