Monday, July 29, 2019

6 - Setup, Chroot, Console, and Automation



Toucan Linux Project - 6


A Roll Your Own Distribution

Goal 1 – Base System
Stage 2 – Building the Base System

We now have an intermediate system installed in $LFS/tools that contains a full toolchain to compile all the programs we intend to install in the base system. This system won’t do much more than boot, initialize, and provide simple standard Unix programs for command line use. Our first target application is still Angband, a console game that will make good use of the console through the ncurses library. If you’re an Angband player you’ll appreciate this; if not, you might enjoy it once you’ve beaten the learning curve. It can at least occupy you while you wait for long compiles on the host. Either way, it is a target that will test the base system.

In this stage we will build all the applications and install them into the directory structure of the new system. To do this we will use the chroot command to start a shell that sees the $LFS directory (/mnt/lfs) as the root directory. Within this directory must be all the tools necessary to build the base system, and that is what we created in $LFS/tools with the intermediate system. After this stage we can even delete the intermediate system if we want (or tar it up as a way to build another system on separate hardware or new partitions).

Before we can begin we have to set up the directory tree, because we need a working set of devices in /dev and the other virtual filesystems mounted. Since our base filesystem only contains /tools, we need to do some configuration work first.

Step 1 – Make Basic Directories
The installation of the various programs will make most of the familiar directories such as /bin, /usr, /lib, and /usr/share. But some directories must be made manually because they are the mount points for the virtual filesystems devfs, tmpfs, and proc. Run the following as the root user (unless otherwise noted, all commands in this stage are run by the root user) to create the mount points:

mkdir -pv $LFS/{dev,proc,run,sys}

If you are unfamiliar with the use of braces in bash: brace expansion turns the comma-separated list into one argument per element, each combined with the surrounding text, so mkdir receives all four paths. The above is the equivalent of the following:

mkdir -pv $LFS/dev && mkdir -pv $LFS/proc && mkdir -pv $LFS/run && mkdir -pv $LFS/sys
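Brace expansion is easy to preview: it happens before the command runs, so echo shows exactly what mkdir will receive. A quick sketch (using the literal /mnt/lfs path in place of $LFS):

```shell
# Brace expansion happens before mkdir runs; echo shows the resulting
# argument list.  /mnt/lfs stands in for $LFS.  We run it through bash
# explicitly, since brace expansion is a bash feature, not POSIX sh.
preview=$(bash -c 'echo /mnt/lfs/{dev,proc,run,sys}')
echo "$preview"
# prints: /mnt/lfs/dev /mnt/lfs/proc /mnt/lfs/run /mnt/lfs/sys
```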

Step 2 – Create Hard Device Nodes
Two devices must be present in the /dev directory in order to boot. They must be real device nodes because they are required by the kernel before the device virtual filesystem (devfs) becomes available. The devices needed are console and null. Make them using:

mknod -m 600 $LFS/dev/console c 5 1
mknod -m 600 $LFS/dev/null c 1 3


This will also allow the system to boot with init=/bin/bash as a way to launch a rescue shell. In the past each device node had to be made using mknod, and we would need to know the major and minor numbers of each device and its standard name. For disk drives there used to be names such as hda and hdb, as well as the now familiar sda, sdb, etc. Though they look like files, they are actually device nodes: each uses an inode in the filesystem to hold a special file that interfaces with a piece of hardware. To the kernel each device is simply specified by a major number, which identifies the type of hardware (“hard drive on the EISA bus,” “hard drive on the SCSI bus,” “floppy disk,” “serial terminal,” “device on the universal serial bus,” for example), and a minor number, which identifies which one of those devices: 0 for the first, 1 for the second, 2 for the third, and so on.
You can see these device numbers by using the following:

$ ls -l /dev
total 0
crw------- 1 root root 10, 58 Jul 23 06:09 acpi_thermal_rel
drwxr-xr-x 2 root root 280 Jul 23 06:08 block
drwxr-xr-x 2 root root 80 Jul 23 06:08 bsg
drwxr-xr-x 3 root root 60 Jul 23 06:08 bus
drwxr-xr-x 2 root root 4060 Jul 23 06:09 char
crw------- 1 root root 5, 1 Jul 23 06:09 console
lrwxrwxrwx 1 root root 11 Jul 23 06:08 core -> /proc/kcore
drwxr-xr-x 2 root root 60 Jul 23 06:08 cpu
crw------- 1 root root 10, 62 Jul 23 06:09 cpu_dma_latency
drwxr-xr-x 7 root root 140 Jul 23 06:08 disk
drwxr-xr-x 3 root root 100 Jul 23 06:09 dri
crw-rw---- 1 root video 29, 0 Jul 23 06:09 fb0
lrwxrwxrwx 1 root root 13 Jul 23 06:08 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Jul 23 06:09 full
crw-rw-rw- 1 root root 10, 229 Jul 23 06:09 fuse
crw------- 1 root root 248, 0 Jul 23 06:09 hidraw0
crw------- 1 root root 248, 1 Jul 23 06:09 hidraw1
crw------- 1 root root 248, 2 Jul 23 06:09 hidraw2
crw------- 1 root root 248, 3 Jul 23 06:09 hidraw3
crw------- 1 root root 10, 228 Jul 23 06:09 hpet
crw------- 1 root root 240, 0 Jul 23 06:09 iio:device0
prw------- 1 root root 0 Jul 23 06:08 initctl
drwxr-xr-x 4 root root 600 Jul 23 06:09 input
crw-r--r-- 1 root root 1, 11 Jul 23 06:09 kmsg
srw-rw-rw- 1 root root 0 Jul 23 06:09 log
drwxr-xr-x 2 root root 60 Jul 23 06:08 mapper
crw------- 1 root root 10, 227 Jul 23 06:09 mcelog
crw-rw---- 1 root video 241, 0 Jul 23 06:09 media0
crw-rw---- 1 root video 241, 1 Jul 23 06:09 media1
crw------- 1 root root 245, 0 Jul 23 06:09 mei0
crw-r----- 1 root kmem 1, 1 Jul 23 06:09 mem


With ls -l, where the size of an ordinary file would be printed, ls outputs the major and minor numbers for a device node. The full listing on my system is much longer (with a number of HID, human input device, entries among others); I cut it so we could see better. Note the last one shown is device 1,1, which is system memory; even memory looks like a file for reading.
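You can also read the numbers for a single node with GNU stat; its %t and %T format codes print the major and minor numbers:

```shell
# GNU coreutils stat: for a device node, %t is the major number and
# %T the minor number (both in hexadecimal).
nums=$(stat -c '%t:%T' /dev/null)
echo "$nums"    # prints 1:3 on Linux: major 1 (memory devices), minor 3
```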

The Console
The console is special because when Unix was first created there was a single serial device to which the kernel was guaranteed to be able to output text. This was known as “the console,” and if the kernel experienced an error while booting, the clues to fixing that error would be printed to the console, and yes, I mean printed. Most hardware had a special serial port for connecting just the console. In the early days, as seen in the very famous picture of Ken Thompson and Dennis Ritchie,

it was a teletype, a type of terminal that output lines of text on a roll of paper and had a keyboard for input that, as the operator typed, also printed that input. This device is major number 5, minor number 1. As you can see above, when the device node was created it was specified as a character type (“c”), major number 5, and minor number 1. For Linux the console is a virtual terminal that looks like a text screen. You can see it by pressing CTRL-ALT-F1; to get back to your graphical session, press CTRL-ALT-F7 (this can differ between distros, so if CTRL-ALT-F7 doesn’t work, try CTRL-ALT-F9 and move backwards through the function keys). If you do this you’ll probably see some text above a login prompt. I am guessing that if you are attempting TTLP you know this, but just in case some very brave souls are following along I’ve decided to cover it.

This is Linux’s version of the console. It assumes there is a video card of some sort attached to the system (it can boot without one, though, in “blind mode”) and the console code in the kernel will create a virtual terminal using the video card. On most systems it is driven by what is called a “frame buffer,” but in older times, if the video circuitry could only provide an 80x25 text screen, Linux would support that.

Take a look at the /etc/inittab from the last article. Right in the middle you’ll see this:

1:2345:respawn:/sbin/agetty --noclear tty1 9600
2:2345:respawn:/sbin/agetty tty2 9600
3:2345:respawn:/sbin/agetty tty3 9600
4:2345:respawn:/sbin/agetty tty4 9600
5:2345:respawn:/sbin/agetty tty5 9600
6:2345:respawn:/sbin/agetty tty6 9600 



That section tells init to run a program called agetty, located in the system binary directory (/sbin), with some parameters. You’ll see that in my case there are six of them, and this is fairly standard. Each specifies a device using “tty,” which stands for “teletype.” The agetty program is a simple program that opens a tty port, prints the contents of a file called /etc/issue, prompts for a user name, and then runs the /bin/login command, which handles whatever is used for a text mode command line login procedure. The above lines create six terminals, which Linux maps to the virtual terminals on CTRL-ALT-F1 through F6. Here’s the listing of my first 10 tty devices:

$ ls -l /dev/tty?
crw--w---- 1 root tty 4, 0 Jul 23 06:09 /dev/tty0
crw--w---- 1 root tty 4, 1 Jul 23 06:09 /dev/tty1
crw--w---- 1 root tty 4, 2 Jul 23 06:09 /dev/tty2
crw--w---- 1 root tty 4, 3 Jul 23 06:09 /dev/tty3
crw--w---- 1 root tty 4, 4 Jul 23 06:09 /dev/tty4
crw--w---- 1 root tty 4, 5 Jul 23 06:09 /dev/tty5
crw--w---- 1 root tty 4, 6 Jul 23 06:09 /dev/tty6
crw--w---- 1 mstute mstute 4, 7 Jul 23 06:09 /dev/tty7
crw--w---- 1 root tty 4, 8 Jul 23 06:09 /dev/tty8
crw--w---- 1 root tty 4, 9 Jul 23 06:09 /dev/tty9

As you can see, the root user owns them all except device 7 (tty7), which is owned by me since this is my system and I am logged in. For you it will be the user name you used to log in to your system. For MK Linux it will be “demo.” When the X Window System (the graphical component of Unix) starts, it needs a terminal as well, and it will start by using tty7, which is mapped to CTRL-ALT-F7. When I log in through X I become the owner of the tty device so I can read and write to it. Some distros will bring up fewer virtual terminals and possibly start X on a lower number, but most versions of Linux default to /dev/tty7 for X. For X this device is still used for keyboard (and, indirectly, mouse) input. Like all things in Linux this can be configured, so we might as well follow it to the bottom. On my TTLP system I use SLiM as the X display manager.

A display manager handles multiple X window displays using a protocol called the X Display Manager Control Protocol (XDMCP). Its job is similar to init, agetty, and login for text terminals. It should prompt for a login and password (or other authentication type), authenticate the user, and then start a session. In command line mode the initial shell (/bin/sh) is considered the session process (when it goes away the user is considered logged out and the OS can reclaim all the resources). For X this is generally a window manager or, in the past, a terminal emulator. When you close this process, your session is over and you are logged out. Examples of display managers are LightDM, XDM, SLiM, SDDM, and GDM. I use SLiM, and the configuration file to select the display manager is /etc/conf.d/xdm. Its contents are:

# We always try and start X on a static VT. The various DMs normally default
# to using VT7. If you wish to use the xdm init script, then you should ensure
# that the VT checked is the same VT your DM wants to use. We do this check to
# ensure that you haven't accidentally configured something to run on the VT
# in your /etc/inittab file so that you don't get a dead keyboard.
CHECKVT=7


# What display manager do you use ? [ xdm | gdm | sddm | gpe | lightdm | entrance ]
# NOTE: If this is set in /etc/rc.conf, that setting will override this one.
DISPLAYMANAGER="slim" 


This shows that the display manager will be SLiM, but it also shows why tty7 is the one chosen. The CHECKVT=7 line tells the script which virtual terminal to try as the first available. If a distro only uses three virtual terminals, this might be set to four, and CTRL-ALT-F4 would be the X window session.

Step 3 – Mount virtual filesystems
If you can’t do all the following steps in one session, that is okay, but you’ll need to use the chroot command to re-enter the target and remount the virtual filesystems each time. This is necessary because some of the commands depend on them, and some configuration scripts use information published in /sys and /proc to determine how to configure the software for the system. Create a script in $LFS/sources to mount them:

cat > $LFS/sources/mount_vfs << EOF
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
EOF



Make it executable:

chmod 750 $LFS/sources/mount_vfs


and finally run it:

$LFS/sources/mount_vfs


If you shut down and resume this stage later, you will need to execute that script after mounting the target’s filesystem on the host.

Step 4 – Chroot Into the Target
Now we begin building the software of the system itself. You will need the network working on the host before you run this command. Until the network is fully up on the base system (many steps from now) we’ll use the host’s network as the network for the target.
Make another script to handle the change of root to enter into the target’s filesystem:

cat << EOF > $LFS/sources/ch2tgt
chroot $LFS /tools/bin/env -i \\
HOME=/root TERM=$TERM PS1='(ttlp chroot) \u:\w\\$ ' \\
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \\
/tools/bin/bash --login +h
EOF
chmod 750 $LFS/sources/ch2tgt



This uses the -i option of the env command to clear all variables from the environment and then populate it only with the ones we want: HOME, TERM, PS1, and PATH. Notice that /tools/bin is now last in the PATH. As the new commands are built they will be installed in /bin, /sbin, /usr/bin, and /usr/sbin, and we want the shell to start using them. Since bash normally remembers the location of commands and does not search the PATH again, we use the +h option to turn off hashing. This ensures it will search the PATH to find executables and pick up the new ones as we create them.
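You can watch env -i do its work from any shell. A small demonstration (FOO is an arbitrary variable chosen for the example):

```shell
# env -i starts the command with an empty environment; only the pairs
# given on the command line survive.  Running env again shows the result.
result=$(env -i FOO=bar /usr/bin/env)
echo "$result"    # prints: FOO=bar
```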

Now run the chroot script:

$LFS/sources/ch2tgt 


We are now inside the target using the intermediate tools installed in /tools. All commands will now operate only on the target and not the host, though the target is still fully dependent on the host. To build the target we use the same process that built the intermediate system: unarchive a package, configure it, compile it, and install it. It is tedious, and for LFS it is recommended that you perform each one manually as part of the learning process.

But in The Toucan Linux Project we are more interested in getting our base system complete so we can begin the greater work of extending it into a more usable system. For this reason we will write a bit of code to do the bulk of the work for most of the packages, leaving the remaining ones to do manually. Later we will extend this work into a package manager that we can use to maintain the system and compile everything with more advanced optimization options. Lastly, we’ll use a system of two partitions to allow experiments to be performed safely.

Step 5 – Setting Up the Target
We need to do some preparation work before we begin. The following commands will create the basic directory structure of the target:

mkdir -pv /{bin,boot,etc/{opt,sysconfig},home,lib/firmware,mnt,opt}
mkdir -pv /{media/{floppy,cdrom},sbin,srv,var}
install -dv -m 0750 /root
install -dv -m 1777 /tmp /var/tmp
mkdir -pv /usr/{,local/}{bin,include,lib,sbin,src}
mkdir -pv /usr/{,local/}share/{color,dict,doc,info,locale,man}
mkdir -v /usr/{,local/}share/{misc,terminfo,zoneinfo}
mkdir -v /usr/libexec
mkdir -pv /usr/{,local/}share/man/man{1..8}
mkdir -v /usr/lib/pkgconfig


case $(uname -m) in
x86_64) mkdir -v /lib64 ;;
esac


mkdir -v /var/{log,mail,spool}
ln -sv /run /var/run
ln -sv /run/lock /var/lock
mkdir -pv /var/{opt,cache,lib/{color,misc,locate},local} 


This is directly from LFS. Unlike LFS we will make heavy use of /usr/local and almost no use of /opt. In TTLP, /opt is the place where bad programs go, those that don’t play nice with others, kind of like jail.

Step 6 – Additional Configuration
Notice that bash doesn’t have a name for the user in the prompt; it says “I have no name!”. That’s because there is no user database yet. We have some more work to do, and it is covered in LFS. It is replicated here to make matters easier. Be sure to check the LFS book for the reasoning behind this work; it is basic, but essential, in nature.

Make essential links:

ln -sv /tools/bin/{bash,cat,chmod,dd,echo,ln,mkdir,pwd,rm,stty,touch} /bin
ln -sv /tools/bin/{env,install,perl,printf} /usr/bin
ln -sv /tools/lib/libgcc_s.so{,.1} /usr/lib
ln -sv /tools/lib/libstdc++.{a,so{,.6}} /usr/lib


ln -sv bash /bin/sh
ln -sv /proc/self/mounts /etc/mtab 


Create the user database:

cat > /etc/passwd << "EOF"
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/dev/null:/bin/false
daemon:x:6:6:Daemon User:/dev/null:/bin/false
messagebus:x:18:18:D-Bus Message Daemon User:/var/run/dbus:/bin/false
nobody:x:99:99:Unprivileged User:/dev/null:/bin/false
EOF
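For reference, each line of /etc/passwd is seven colon-separated fields: login name, password placeholder, UID, GID, comment (GECOS), home directory, and login shell. A quick way to pull a field out with cut:

```shell
# Fields: name:password:UID:GID:comment(GECOS):home:shell
entry='daemon:x:6:6:Daemon User:/dev/null:/bin/false'
user=$(echo "$entry"  | cut -d: -f1)
shell=$(echo "$entry" | cut -d: -f7)
echo "$user $shell"    # prints: daemon /bin/false
```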



Create the groups file (we’ll stick with the old school standards):

cat > /etc/group << "EOF"
root:x:0:
bin:x:1:daemon
sys:x:2:
kmem:x:3:
tape:x:4:
tty:x:5:
daemon:x:6:
floppy:x:7:
disk:x:8:
lp:x:9:
dialout:x:10:
audio:x:11:
video:x:12:
utmp:x:13:
usb:x:14:
cdrom:x:15:
adm:x:16:
messagebus:x:18:
input:x:24:
mail:x:34:
kvm:x:61:
wheel:x:97:
nogroup:x:99:
users:x:999:
EOF



Restart bash to allow it to find the user name:


exec /tools/bin/bash --login +h

Lastly, create some log files the system startup programs need:

touch /var/log/{btmp,lastlog,faillog,wtmp}
chgrp -v utmp /var/log/lastlog
chmod -v 664 /var/log/lastlog
chmod -v 600 /var/log/btmp 


Step 7 – Automating the Build Procedure
A lot of these packages will use the same basic installation procedure, so it can be repeated. To automate it we’ll create a basic script to handle the bulk of the work. We’ll build all packages of the LFS system and add a few of our own. These will forever be considered the “base” system and contain the minimum set of tools we need to maintain the system. They will be installed in the root under /bin, /sbin, /usr/bin, and /usr/sbin. Libraries will be installed in /lib and /usr/lib, shared data in /usr/share, and base configurations in /etc.

The LFS book recommends using the test procedures of the packages to verify they work, usually with make check or make test. For some packages this is sensible, but since we are building a system, and many packages assume they are building on a complete system rather than a partial one, many tests will fail. Some of them take a lot of time. For some, running the tests is essential. The important ones I will point out here to be sure you take the time; the others you can skip if you want, though I recommend you run them. Since part of building a high performance experimental system is running the tests after compilation, we’ll need to do that as we test various options. If you run all the tests to be sure the base is stable, it makes a good foundation for the experimental work. However, since we are using the experimental version of LFS, be prepared for some test failures that are not noted in the LFS book.

The basic idea is to build each package in six steps:
1) Unarchive it
2) Configure it
3) Build it with make
4) Install it, usually with make install
5) Test the install
6) Clean up

You can use the aliases we created before, ut and del, if you choose to do this manually. They will save you some keystrokes.

For now, we’ll build a small Perl script to do the bulk of the work, including running the checks. Later we can reuse this code in the package manager. The theory of operation is to have a standard configuration script that handles the basic configuration for as many programs in the base system as possible, which is simply changing the install prefix to /usr, since by convention configure will pick /usr/local.

./configure --prefix=/usr


Some packages require a separate build directory; most do not. The build step is generally just

make


Others will need some tweaking on the tests which is generally:

make check


Yet others require a bit of work on the install, though most of the time it is

make install


Because there are so many variations, we will have to create a file for each package that requires differences. This is okay, because in our package manager we want absolute control over each and every package anyway, right down to the compiler options used and even how to link it. But we’ll have default scripts for those that don’t need changes.

Start by making the default scripts in a directory underneath the root. We’ll name this directory cci, for configure-compile-install:

mkdir ~/cci
cd ~/cci



Now we’ll make the files that contain the default commands:

echo "./configure --prefix=/usr" > config
echo 'make -j $CCI_CPUS' > compile
echo "make check" > test
echo "make install" > install
echo "# NOP" > null


That gives us four default files named config, compile, test, and install, plus a null placeholder we’ll use later. The compile file uses an environment variable called CCI_CPUS, which should be set to the number of processors on the system. Let’s add that to root’s configuration:

cd
echo "export CCI_CPUS=`cat /proc/cpuinfo | grep vendor | wc -l`" >> .bashrc
cat .bashrc



At the end of .bashrc you’ll find a line that exports CCI_CPUS equal to the number of processors on your system. If for any reason you don’t want to use all the processors to compile, then change this to a lower number.
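If the intermediate coreutils build includes nproc (it normally does), that is a shorter way to get the same count; a sketch:

```shell
# nproc (GNU coreutils) prints the number of processing units available,
# the same count the grep pipeline above produces on most systems.
cpus=$(nproc)
echo "$cpus"
```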

Now we’ll start our Perl script to do this work. It starts with a Perl standard to require strict variable scoping to ensure we specify the scope of our variables. It will also make sure we don’t mistype a variable name somewhere in the program.
The program will accept four options that override the default configuration: -c for the configuration file, -m for the compile (make) file, -t for the test file, and -i for the install file. If these aren’t specified it will use the default files automatically, so that we need to create as few extra files as possible. When we are done we’ll have everything we need (the data) to drive our package manager version 1.0. Here’s the script so far:

1 #!/tools/bin/perl
2
3 use strict;
4 use Getopt::Std;
5 use File::Copy;
6
7 my %opts;
8
9 #Get command line arguments
10 getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");



There isn’t any need to copy and paste this as the final file will be given at the end. These snippets are simply for discussion and may still be of use to you even if you aren’t a Perl programmer. Line 1, as with any shell script, tells what program to use to interpret the script. This would normally be /usr/bin/perl, but our Perl interpreter is currently installed in /tools/bin in the intermediate system. Line 3 sets up the strict requirements for references, variables, and subroutines. Line 4 sets up the standard Perl command line parser. Line 5 imports the File::Copy (copy and move) routines. Line 7 declares a hash called %opts and, finally, line 10 uses the getopts subroutine to populate the %opts hash with the options from the command line.

Next we declare variables and set paths.

12 # Declare variables and set paths
13 my $package;
14 my $srcs="/sources";
15 my $build_dir="$srcs/build";
16 my $config_dir="$srcs/configs";
17 my ($config,$test,$compile,$install);
18 my $BIN="/tools/bin";
19 my $name=$opts{n}; 


The $package variable will hold the filename of the tarball, which is our package. The $srcs variable is the path to the directory containing the source tarballs. The $build_dir is where the tarball will be exploded and the code will be compiled (this will be deleted when we are done). Underneath the source directory we will create a directory called configs which will contain the various configuration files we need for each package. Line 17 declares four variables: $config will contain the name of the configuration file, $test the name of the test file, $compile the name of the compile file, and $install the name of the install file (this will all become clear shortly). Last, we create a variable called $BIN to hold the path to the executables; if you wish to make the script more secure, put $BIN in front of all the commands used in system(). In this intermediate script we will use the Perl system() function to execute the commands we need in a shell. This isn’t a great practice unless you specify exactly which command to run by using an absolute path. We can’t do that now because we want the system to use the new programs as we make them. This is an insecure practice, but many things about building a system are insecure. If this is a great concern, disable the network while you work. We also have to use system() for certain functions we don’t have available in native Perl, because our installation is very basic.

Another bad coding practice regarding security is to read data from an external source, such as a file, and pass it to the system() function, because we can’t be sure of the contents, and they can be used to inject a command. For instance, say you have a file that contains the names of directories you wish to list, one per line. It might look like this:

/bin
/usr/bin
/usr/local/bin



If the program opened the file and used the ls command through the shell to get the file list, you could have a problem. Suppose we put the name of the directory into a variable called $dir and did the following to capture the output:

my $var=`ls $dir`;



Seems safe enough, but suppose the input file contained:

/bin && useradd hacker -p "hashedpassword"


If hashedpassword were a valid password hash, then the program, if run as root, would create a new user with a known password. Be aware this script is not designed for anything but our first system build. We will delete it later.
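The same class of bug, and its fix, can be shown directly in shell: quoting keeps the untrusted string as one literal argument, so it is never re-parsed as code. The hostile “directory name” below is invented for the demonstration:

```shell
# A hostile "directory name" as it might arrive from an input file.
dir='/bin && echo INJECTED'

# Unsafe: re-parsing the string as shell code runs the injected command.
unsafe=$(eval "ls -d $dir" 2>/dev/null)

# Safe: quoted, the whole string is a single literal argument to ls, so
# the only thing that can happen is a "no such directory" error.
if ls -d -- "$dir" >/dev/null 2>&1; then result=injected; else result=safe; fi
echo "$result"    # prints: safe
```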
Prepare by running a few commands:

mkdir -p /sources/{configs,build}
cd ~/cci
mv config install compile test null /sources/configs



To speed up the build process, we can compile on a RAM disk. The virtual filesystem tmpfs will serve our purposes nicely. Create one and mount it on /sources/build using the following:

mount -o size=250M -t tmpfs tmpfs /sources/build

This will create a RAM disk that will use at most 250MB of space. It will automatically allocate and shrink its memory usage as we add and delete files. That is sufficient for anything we want to compile.

Back to the listing. The subroutine setup() is called at line 22 to determine which scripts to use. The results are printed to the terminal for viewing.

21 ###### Main routine
22 setup();
23
24 print "Using the following to build $package:\n";
25 print " config = $config\n";
26 print " compile = $compile\n";
27 print " test = $test\n";
28 print " install = $install\n";



Now we reach the heart of the process.

30 # Explode the tar
31 my $dir=untar($package,$build_dir);
32 chdir($dir);
33 # Make the build script
34 my $script=make_script($config,$compile,$test,$install);
35 open (OUT,">cci_build") or fail();
36    print OUT $script;
37 close(OUT);
38 print "Build script is:\n$script\n";
39
40 # Execute it
41 system("bash cci_build | tee ../$package.log");
42
43 # Clean up
44 system("rm -rf $dir"); 


Line 31 calls a subroutine to explode the tar into the build directory; it returns the name of the directory created, which is stored in $dir. Line 32 changes the current directory to that directory. Line 34 uses the subroutine make_script() to build the build script from the config, compile, test, and install scripts. Lines 35 to 37 create the build script, called cci_build, in the build directory, and line 38 prints its contents to the terminal. At line 41 bash is called to run the script (it is not marked executable) with the results piped through tee to create a log file while printing them to the screen. Lastly, line 44 uses the rm command to delete the build directory. Lines 49 through 65 (shown in the complete listing below) create a directory under the configs directory with the package name and move any custom scripts into it. These will be needed later for the full package manager. This occurs only if we provide the -n option to give the package a name.

The make_script() subroutine is passed the four script names as determined by the setup() subroutine, reads each file in, concatenates them in order, and returns the result. This will be the contents of the build script. The reading is handled by a subroutine that uses Perl’s slurp mode to read the complete contents of a file without splitting on newlines (see lines 95 – 101 in the complete listing).

86 sub make_script {
87    my $config=shift;
88    my $compile=shift;
89    my $test=shift;
90    my $install=shift;
91
92    my $script=read_in($config);
93    my $contents=read_in($compile);
94    $script.=$contents;
95    my $contents=read_in($test);
96    $script.=$contents;
97    my $contents=read_in($install);
98    $script.=$contents;
99    return($script);
100 }
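Stripped of the Perl, make_script() is just file concatenation; the shell equivalent is a single cat. A sketch using a temporary directory in place of /sources/configs:

```shell
# make_script, minus the Perl: the build script is just the four
# part-files concatenated in order.  A temporary directory stands in
# for /sources/configs here.
tmp=$(mktemp -d)
printf '%s\n' './configure --prefix=/usr' > "$tmp/config"
printf '%s\n' 'make -j $CCI_CPUS'         > "$tmp/compile"
printf '%s\n' 'make check'                > "$tmp/test"
printf '%s\n' 'make install'              > "$tmp/install"
cat "$tmp/config" "$tmp/compile" "$tmp/test" "$tmp/install" > "$tmp/cci_build"
script=$(cat "$tmp/cci_build")
rm -rf "$tmp"
echo "$script"
```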



The tarball is exploded using a call to tar, passed the name of the tarball and the build directory. Line 125 builds the command line. We want it to use the intermediate tar command until tar appears in the /bin directory later, so we can’t use the $BIN variable yet. The subroutine changes to the build directory at line 127 and executes the command at line 129. To derive the package name, a regular expression replaces the .tar.* extension with an empty string. This leaves the package name and version.

121 sub untar {
122    my $package=shift;
123    my $build_dir=shift;
124
125    my $cmd="tar -xf $srcs/$package";
126    print " Changing to $build_dir\n";
127    chdir($build_dir);
128    print " Executing $cmd...\n";
129    system($cmd);
130    # Return the directory that was created
131    my $temp=$package;
132    $temp =~ s/.tar.*$//;
133    return("$build_dir/$temp");
134 }
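The same name derivation can be done in shell with parameter expansion; the tarball name below is only an example:

```shell
# Strip the longest suffix matching ".tar*" to recover name-version,
# the same result the Perl regular expression produces.
package="binutils-2.32.tar.xz"
name=${package%%.tar*}
echo "$name"    # prints: binutils-2.32
```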



The last bit we are interested in is the setup() subroutine. This checks the command line options for each of the four parts of the build script (config, compile, test, install) to see if they are present. If they are, it expects to find the named files in the configs directory; if not, it uses the default name from the configs directory. The subroutine first checks that a package file has been given with the -p option and that it exists (lines 140 – 150), and that the build directory exists (lines 152 – 156). The -c option is checked at line 159, and if present the argument is checked to be sure it is a readable file in the configs directory; if it isn’t, we stop with an error at line 160. The config script name is set to the default at line 164, but if the -c option is present the config script is set to its argument at line 165. The same logic is used for the -m option (lines 168 – 175), the -t option (lines 177 – 184), and the -i option (lines 186 – 193).

138 sub setup {
139    # Check we have a package
140    if(!$opts{p}) {
141       print "A package (tarball) must be specified\n";
142       exit;
143    }
144    $package=$opts{p};
145
146    # Check the package exists
147    if (! -f "$srcs/$package") {
148       print "Package $package not found in $srcs\n";
149       exit;
150    }
151
152    # Check that the build dir exists
153    if (! -d "$build_dir" ) {
154       print "The build directory is invalid: $build_dir\n";
155       exit;
156    }
157
158    #Verify any supplied scripts are indeed readable
159    if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
160       print "Configure script $config_dir/$opts{c} doesn't exist\n";
161       exit;
162    }
163    else {
164       $config="config";
165       $config="$opts{c}" if($opts{c});
166    }
167
168    if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
169       print "Compile script $config_dir/$opts{m} doesn't exist\n";
170       exit;
171    }
172    else {
173       $compile="compile";
174       $compile="$opts{m}" if($opts{m});
175    }
176
177    if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
178       print "Test script $config_dir/$opts{t} doesn't exist\n";
179       exit;
180    }
181    else {
182       $test="test";
183       $test="$opts{t}" if($opts{t});
184    }
185
186    if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
187       print "Install script $config_dir/$opts{i} doesn't exist\n";
188       exit;
189    }
190    else {
191       $install="install";
192       $install="$opts{i}" if($opts{i});
193    }
194 } 
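Each of the four checks in setup() follows the same default-or-override pattern. A minimal shell sketch of that rule (the option value here is a hypothetical example, not part of the script):

```shell
# Default-or-override rule used for each of -c, -m, -t, -i:
# start with the stock script name and replace it only if an
# option value was supplied on the command line.
c_opt="binutils-config"   # hypothetical -c value; empty when not given
config="config"           # default script name in the configs directory
if [ -n "$c_opt" ]; then
   config="$c_opt"
fi
echo "$config"            # binutils-config
```

With `c_opt` empty, the default name `config` survives, which is exactly what lines 163 – 166 do in Perl.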


The complete listing:

1 #!/tools/bin/perl
2
3 use strict;
4 use Getopt::Std;
5 use File::Copy;
6
7 my %opts;
8
9 #Get command line arguments
10 getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");
11
12 # Declare variables and set paths
13 my $package;
14 my $srcs="/sources";
15 my $build_dir="/sources/build";
16 my $config_dir="$srcs/configs";
17 my ($config,$test,$compile,$install);
18 my $BIN="/tools/bin";
19 my $name=$opts{n};
20
21 ###### Main routine
22 setup();
23
24 print "Using the following to build $package:\n";
25 print " config = $config\n";
26 print " compile = $compile\n";
27 print " test = $test\n";
28 print " install = $install\n";
29
30 # Explode the tar
31 my $dir=untar($package,$build_dir);
32 chdir($dir);
33 # Make the build script
34 my $script=make_script($config,$compile,$test,$install);
35 open (OUT,">cci_build") or fail();
36    print OUT $script;
37 close(OUT);
38 print "Build script is:\n$script\n";
39
40 # Execute it
41 system("bash cci_build | tee ../$package.log");
42
43 # Clean up
44 system("rm -rf $dir");
45
46 # If name is set create the directory for package
47 # and move in the scripts into it for
48 # future use
49 if($name) {
50    print "Changing to $srcs\n";
51    chdir($srcs);
52    system("mkdir $name");
53    if($opts{c}) {
54       if($opts{c} eq "null") {
55          copy("$config_dir/$config" ,"$name/config");
56       } else {
57          move("$config_dir/$config" ,"$name/config");
58       }
59    }
60    if($opts{m}) {
61       if($opts{m} eq "null") {
62          copy("$config_dir/$compile","$name/compile");
63       } else {
64          move("$config_dir/$compile","$name/compile");
65       }
66    }
67    if($opts{t}) {
68       if($opts{t} eq "null") {
69          copy("$config_dir/$test","$name/test");
70       } else {
71          move("$config_dir/$test","$name/test");
72       }
73    }
74    if($opts{i}) {
75       if($opts{i} eq "null") {
76          copy("$config_dir/$install","$name/install");
77       } else {
78          move("$config_dir/$install","$name/install");
79       }
80    }
81 }
82 exit;
83
84 # Make the build script by reading in all files
85 # and appending in execution order
86 sub make_script {
87    my $config=shift;
88    my $compile=shift;
89    my $test=shift;
90    my $install=shift;
91
92    my $script=read_in($config);
93    my $contents=read_in($compile);
94    $script.=$contents;
95    $contents=read_in($test);
96    $script.=$contents;
97    $contents=read_in($install);
98    $script.=$contents;
99    return($script);
100 }
101
102 # Fail but clean up
103 sub fail
104 {
105    system("rm -rf $dir") if($dir);
106    print "Fail\n";
107    exit;
108 }
109
110 # Return the complete contents of a file
111 sub read_in
112 {
113    local $/ = undef;
114    open my $fh, "$config_dir/$_[0]" or die "Can't open $_[0]: $!";
115    my $slurp = <$fh>;
116    return $slurp;
117 }
118
119 # Untar the package in the build area and return the
120 # directory it creates
121 sub untar {
122    my $package=shift;
123    my $build_dir=shift;
124
125    my $cmd="$BIN/tar -xf $srcs/$package";
126    print "   Changing to $build_dir\n";
127    chdir($build_dir);
128    print "   Executing $cmd...\n";
129    system($cmd);
130    # Return the directory that was created
131    my $temp=$package;
132    $temp =~ s/.tar.*$//;
133    return("$build_dir/$temp");
134 }
135
136 # Sanity check the command line
137 # Determine scripts to use
138 sub setup {
139    # Check we have a package
140    if(!$opts{p}) {
141       print "A package (tarball) must be specified\n";
142       exit;
143    }
144    $package=$opts{p};
145
146    # Check the package exists
147    if (! -f "$srcs/$package") {
148       print "Package $package not found in $srcs\n";
149       exit;
150    }
151
152    # Check that the build dir exists
153    if (! -d "$build_dir" ) {
154       print "The build directory is invalid: $build_dir\n";
155       exit;
156    }
157
158    #Verify any supplied scripts are indeed readable
159    if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
160       print "Configure script $config_dir/$opts{c} doesn't exist\n";
161       exit;
162    }
163    else {
164       $config="config";
165       $config="$opts{c}" if($opts{c});
166    }
167
168    if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
169       print "Compile script $config_dir/$opts{m} doesn't exist\n";
170       exit;
171    }
172    else {
173       $compile="compile";
174       $compile="$opts{m}" if($opts{m});
175    }
176
177    if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
178       print "Test script $config_dir/$opts{t} doesn't exist\n";
179       exit;
180    }
181    else {
182       $test="test";
183       $test="$opts{t}" if($opts{t});
184    }
185
186    if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
187       print "Install script $config_dir/$opts{i} doesn't exist\n";
188       exit;
189    }
190    else {
191       $install="install";
192       $install="$opts{i}" if($opts{i});
193    }
194 }

Run the following to make the script in the ~/cci directory.

cat > cci.pl << "EOF"
#!/tools/bin/perl

use strict;
use Getopt::Std;
use File::Copy;

my %opts;

#Get command line arguments
getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");

# Declare variables and set paths
my $package;
my $srcs="/sources";
my $build_dir="/sources/build";
my $config_dir="$srcs/configs";
my ($config,$test,$compile,$install);
my $BIN="/tools/bin";
my $name=$opts{n};

###### Main routine
setup();

print "Using the following to build $package:\n";
print "   config  = $config\n";
print "   compile = $compile\n";
print "   test    = $test\n";
print "   install = $install\n";

# Explode the tar
my $dir=untar($package,$build_dir);
chdir($dir);
# Make the build script
my $script=make_script($config,$compile,$test,$install);
open (OUT,">cci_build") or fail();
print OUT $script;
close(OUT);
print "Build script is:\n$script\n";

# Execute it
print "bash cci_build | tee ../$package.log\n";
system("bash cci_build | tee ../$package.log");

# Clean up
system("rm -rf $dir");

# If name is set create the directory for package
# and move in the scripts into it for
# future use
if($name) {
   print "Changing to $srcs\n";
   chdir($srcs);
   system("mkdir $name");
   if($opts{c}) {
      if($opts{c} eq "null") {
         copy("$config_dir/$config" ,"$name/config");
      } else {
         move("$config_dir/$config" ,"$name/config");
      }
   }
   if($opts{m}) {
      if($opts{m} eq "null") {
         copy("$config_dir/$compile","$name/compile");
      } else {
         move("$config_dir/$compile","$name/compile");
      }
   }
   if($opts{t}) {
      if($opts{t} eq "null") {
         copy("$config_dir/$test","$name/test");
      } else {
         move("$config_dir/$test","$name/test");
      }
   }
   if($opts{i}) {
      if($opts{i} eq "null") {
         copy("$config_dir/$install","$name/install");
      } else {
         move("$config_dir/$install","$name/install");
      }
   }
}
exit;

# Make the build script by reading in all files
# and appending in execution order
sub make_script {
   my $config=shift;
   my $compile=shift;
   my $test=shift;
   my $install=shift;

   my $script=read_in($config);
   my $contents=read_in($compile);
   $script.=$contents;
   $contents=read_in($test);
   $script.=$contents;
   $contents=read_in($install);
   $script.=$contents;
   return($script);
}

# Fail but clean up
sub fail
{
   system("rm -rf $dir") if($dir);
   print "Fail\n";
   exit;
}

# Return the complete contents of a file
sub read_in
{
    local $/ = undef;
    open my $fh, "$config_dir/$_[0]" or die "Can't open $_[0]: $!";
    my $slurp = <$fh>;
    return $slurp;
}

# Untar the package in the build area and return the
# directory it creates
sub untar {
   my $package=shift;
   my $build_dir=shift;

   my $cmd="$BIN/tar -xf $srcs/$package";
   print "   Changing to $build_dir\n";
   chdir($build_dir);
   print "   Executing $cmd...\n";
   system($cmd);
   # Return the directory that was created
   my $temp=$package;
   $temp =~ s/.tar.*$//;
   return("$build_dir/$temp");
}

# Sanity check the command line
# Determine scripts to use
sub setup {
   # Check we have a package
   if(!$opts{p}) {
      print "A package (tarball) must be specified\n";
      exit;
   }
   $package=$opts{p};

   # Check the package exists
   if (! -f "$srcs/$package") {
      print "Package $package not found in $srcs\n";
      exit;
   }

   # Check that the build dir exists
   if (! -d "$build_dir" ) {
      print "The build directory is invalid: $build_dir\n";
      exit;
   }

   #Verify any supplied scripts are indeed readable
   if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
      print "Configure script $config_dir/$opts{c} doesn't exist\n";
      exit;
   }
   else {
      $config="config";
      $config="$opts{c}" if($opts{c});
   }

   if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
      print "Compile script $config_dir/$opts{m} doesn't exist\n";
      exit;
   }
   else {
      $compile="compile";
      $compile="$opts{m}" if($opts{m});
   }

   if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
      print "Test script $config_dir/$opts{t} doesn't exist\n";
      exit;
   }
   else {
      $test="test";
      $test="$opts{t}" if($opts{t});
   }

   if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
      print "Install script $config_dir/$opts{i} doesn't exist\n";
      exit;
   }
   else {
      $install="install";
      $install="$opts{i}" if($opts{i});
   }
}
EOF
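A small detail in untar() deserves a note: the directory the tarball creates is guessed by stripping everything from ".tar" onward from the file name. A shell equivalent (the package name is just an example; note the Perl dot is actually a regex wildcard, though for ordinary file names the effect is the same):

```shell
# Mirror of the Perl substitution  $temp =~ s/.tar.*$//;
# strip ".tar" and everything after it from the tarball name.
package="angband-4.1.3.tar.gz"   # example file name
dir="${package%%.tar*}"
echo "$dir"                      # angband-4.1.3
```

This only works when the tarball unpacks into a directory named after the file, which is the convention for nearly all source packages.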

Or you can download it instead, since Blogger has formatting issues with code:

cci.pl
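To see what the generated cci_build file amounts to, here is a shell sketch of the make_script() step: the four stage scripts concatenated in execution order (the stage contents below are placeholders, not real build commands):

```shell
# Create four one-line stage scripts in a throwaway directory,
# then concatenate them in execution order, as make_script() does.
tmp=$(mktemp -d)
printf './configure --prefix=/usr\n' > "$tmp/config"
printf 'make\n'                      > "$tmp/compile"
printf 'make check\n'                > "$tmp/test"
printf 'make install\n'              > "$tmp/install"
cat "$tmp/config" "$tmp/compile" "$tmp/test" "$tmp/install" > "$tmp/cci_build"
cat "$tmp/cci_build"
rm -rf "$tmp"
```

The result is one shell script that configures, compiles, tests, and installs in a single pass, which is what cci.pl hands to bash.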

With the build script in place, we are ready to start building the system in earnest. We will use LFS as the guide, but instead of executing its commands by hand, we'll create the required scripts and let the cci.pl Perl program do the work. We'll begin this work next time.

Copyright (C) 2019 by Michael R Stute
