Tuesday, July 30, 2019

3 - Partitioning




Toucan Linux Project - 3

A Roll Your Own Distribution

Goal 1 – Base System
Stage 1 – Building the Tool Chain

Step 3 - Partition the Disk

Partitioning a disk ought to be easy, correct? Doing it is easy; choosing how to do it is much more difficult. Neal Stephenson said in In the Beginning...Was the Command Line:

The file systems on Unix machines all have the same general structure. On your flimsy operating systems, you can create directories (folders) and give them names like Frodo or My Stuff and put them pretty much anywhere you like. But under Unix the highest level—the root—of the filesystem is always designated with the single character “/” and it always contains the same set of top-level directories:
/usr
/etc
/var
/bin
/proc
/boot
/home
/root
/sbin
/lib
/tmp
and each of these directories typically has its own distinct structure of subdirectories. Note the obsessive use of abbreviations and avoidance of capital letters; this is a system invented by people to whom repetitive stress disorder is what black lung is to miners. Long names get worn down to three-letter nubbins, like stones smoothed by a river.

This is not the place to try to explain why each of the above directories exists, and what is contained in it. At first it all seems obscure. When I started using Linux I was accustomed to being able to create directories wherever I wanted and to give them whatever names struck my fancy. Under Unix you are free to do that, of course (you are free to do anything) but as you gain experience with the system you come to understand that the directories listed above were created for the best reasons and that your life will be much easier if you follow along.

Yet over time changes have been made, such as adding /opt, moving /home to /usr/home, or, worst of all, installing everything in the wrong place (more later). In truth, the gentlemen who gave us Unix (primarily Thompson and Ritchie) used it a lot, and used solely the command line, because in the beginning there was the command line. It is an old operating system, originally bootstrapped in 1969, and there existed a wealth of users experienced with it by the time PCs came about in the 1980s. There was not only a reason for everything they did, but a well-tested reason for everything they did. So with that said we move on to the discussion of partitioning.

Unlike Stephenson’s book, this is the place to try to explain why each of the above directories exists. It goes something like this:

/usr - originally the directory holding user home directories, its use has changed, and it now holds executables, libraries, and shared resources that are not system critical but instead designated for “user” use

/etc - Contains configuration files and some system databases

/var - short for "variable;" a place for files that may change often, such as the contents of a database, log files (usually stored in /var/log), email stored on a server, and files waiting to be printed

/bin - short for "binaries," that is, executable files; contains the set of utilities needed by a system administrator

/home - contains the home directories for the users; originally this was /usr but eventually changed to /home when it became apparent that data needed to be separated from configuration, libraries, and executables

/mnt - This is the default location to mount external devices like hard disk drives or memory sticks; other mount points were traditionally created under it

/lib - This is the repository of all integral UNIX system libraries

/tmp - a place for temporary files; many systems clear this directory upon startup, enforcing that it is indeed temporary

/dev - short for devices; contains file representations of every peripheral device attached to the system (though that is not always true today)

/proc – Not part of the original Unix, this is short for processes and contains information about every process (and lots of system information), published using a virtual filesystem to make it easy for non-root users to access

/boot – Not part of the original Unix, this stores the files necessary to boot the operating system, usually the kernel and necessary modules along with boot options; originally the kernel was simply a file in the root

/root – Not part of the original Unix, this is the home directory for the superuser, root; in classic Unix, root’s home directory was (not surprisingly) the root (/)

/sbin – Not part of the original Unix, this holds a set of utilities useful only to the superuser, root, and thus was not mapped into the search path for normal users.
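These directories are easy to inspect on any running Linux system; a quick sketch (the exact set present varies by distro):

```shell
# Show which of the classic top-level directories exist on this system
# and their permissions; missing ones are silently skipped.
for d in /usr /etc /var /bin /proc /boot /home /root /sbin /lib /tmp /dev; do
    [ -e "$d" ] && ls -ld "$d"
done
```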

This seemed all fine, except as Unix expanded and became more widely used, other vendors started adding their programs. The question became, “Where do I put my files?” If you were using AT&T Unix and AT&T decided to upgrade the system by installing all the new binaries in a directory called /usr/bin.new, then deleting all of /usr/bin, and renaming /usr/bin.new to /usr/bin, then /usr/bin was no longer a safe place. Or even simpler, what if your binary used a name that AT&T later decided to use themselves? When they upgraded they would overwrite your binary with theirs. Essentially, third parties needed a place to install binaries, libraries, and configuration files that were not part of the base operating system controlled by the OS vendor.

To solve this problem, AT&T created underneath /usr a mirror of the base directories (etc, lib, include, and bin) in a directory called local. This directory was meant to contain files for the “local” system, as in those not part of the operating system distribution. This worked well, except vendors didn’t like the idea of putting their binaries right alongside other vendors’ binaries, or even the fact that they didn’t control the whole directory. Then along came /opt (originally /vol), which allowed a vendor to create their own directory and, under it, have complete control, including the ability to delete it later without fear of destroying other applications. The problem became adding the binaries to the user’s search path and the libraries to the linker search path (though with static linking this didn’t matter much). Regardless, /opt became a place for vendor additions on many versions of Unix for many years. Why? I’m not sure; it was certainly possible to create a directory under /usr/local that did the same, and that directory already had a well-established and understood structure.
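As a concrete sketch of the search-path problem, here is what installing into a hypothetical /opt/acme would require (the vendor name “acme” and its paths are made up for illustration):

```shell
# Make the vendor's programs findable: append its bin directory to the
# user's command search path (normally done in a login script).
export PATH="$PATH:/opt/acme/bin"
echo "$PATH"

# Make the vendor's shared libraries findable by the dynamic linker.
# This part needs root, so it is shown commented out:
#   echo /opt/acme/lib > /etc/ld.so.conf.d/acme.conf
#   ldconfig
```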

Now in modern Linux the distro maintainer gets to choose where to install everything. And you are now your own distro maintainer. There are reasons to give each of the directories above its own partition. There are also reasons not to. Decision time.

/boot
The first one is easy. The boot area /boot needs to be on its own partition. The boot loader that the system calls from the BIOS (the primary boot loader) needs everything required to boot the OS here, including all the pieces of the kernel necessary to boot it on the hardware: the kernel file itself, any support modules it might need, and the kernel configuration and boot options. The boot loader is a program (one that used to be very simple) that loads the kernel; the kernel then does what it needs to bring up the complete system before handing off to the initial program. One job the kernel must accomplish is mounting the root directory.

Modern systems use a system called UEFI (Unified Extensible Firmware Interface) to boot from a plethora of media such as hard disks, SSDs, USB memory devices, SD devices, and even over a network. Almost any general purpose computer designed after 2006 will boot using this system. The alternative is the MBR, for master boot record, a legacy method in which the primary bootloader loads the secondary bootloader or, in older days, the operating system itself. In this case, there is a reserved area on the disk where the primary bootloader will look for code to load and execute as the secondary bootloader. This is now known as “legacy boot.”

The compressed Linux kernel will reside here along with the secondary bootloader and its configuration files. This partition doesn’t need to be too large because it shouldn’t contain more than two or three different kernels to allow safe experimenting (experimental, known good, safe standby) and the boot loader. A minimum of 150MB is suggested, but 250MB is better. The EFI area is generally around 25MB, a kernel about 5-8MB, and the secondary bootloader about 9MB. I never have more than about 3 kernels and my boot partition is about 100MB total. With most distros there will be a large (25-30MB) file that contains the initial RAM disk (initrd), which is actually a compressed file system itself. The Toucan Linux Project will not use an initrd unless CPU firmware is required, and in that case it will be very small. If you really need disk space, 35MB is plenty for a single kernel on a system that doesn’t dual boot.

If you are using a system where Linux is already installed and you are using extra disk space for The Toucan Linux Project, you don’t need another boot partition. You can use the one from the host OS unless you specifically want them separate.

swap
This is a special filesystem the kernel will use as virtual memory to swap data pages in and out of memory as needed. With a slim, trim system this won’t be necessary, and heavy swapping is generally avoided anyway since it can lead to SHP (Serious Horrible Performance) and, worse, to a condition called thrashing. But the swap partition is also used to hibernate (suspend to disk), which might be useful on a laptop. I would suggest a size of 4GB minimum for a workstation; for a laptop where hibernation might be a nice feature, the minimum is the size of the system’s RAM. If you have plenty of space, make it the same size as your system RAM. If you are using a system where Linux is already installed and you are using extra disk space for The Toucan Linux Project, you don’t need another swap partition. You can use the one from the host OS.
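If you skip the partition, swap can also live in a file; a small sketch using an arbitrary demo path and size (mkswap only writes a signature, so this much runs unprivileged, while actually enabling the swap requires root):

```shell
# Create a 16MB file and format it as swap space.
dd if=/dev/zero of=/tmp/ttlp-swapfile bs=1M count=16 status=none
chmod 600 /tmp/ttlp-swapfile    # swap files must not be world-readable
mkswap /tmp/ttlp-swapfile       # writes the swap signature
# swapon /tmp/ttlp-swapfile     # as root, to actually enable it
```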

/ (root)
Here it gets a bit harder. This is the last required partition. All the other directories can be created using the space of this partition without any problem to the system (except for performance, since a system can write to multiple partitions in parallel). The real danger is that if the root partition fills up, the system might crash or simply be unable to boot after being shut down. I would say the minimum is 15GB, but that really does vary depending on what you intend to use it for. I believe in a separate root. It might be best to table this discussion until we understand more about the other directories.

/usr
I would argue to NEVER make this a separate partition. It should be part of the root drive along with /lib, /bin, and /sbin. It contains many libraries that programs will need to link. While /bin, /sbin, and /lib are intended to contain everything required to repair the system should the other partitions get trashed, there are simply too many useful tools that require libraries in /usr/lib. It only contains code, configurations, data shared across system applications, and files used for development (like C include files). The biggest single entity it will contain in The Toucan Linux Project is the kernel source code for one to three kernels (these can be quite big, around 10GB required per kernel to compile), and it is also the place to hold all the source code for other libraries and applications for TTLP.

/usr/local
This should be a separate partition under this project, the reason being that the applications that are not firmly in the base install reside here. If you have a stable system and decide to do something experimental (which you will), then having two of these will save you a lot of headache. You will have one as the primary, which will contain your stable system before you start the experiment. You will back it up to the secondary partition, then replace the primary with the secondary at the mount point. Compile. Build. Install. Test. Rinse and repeat. If the experiment fails you just remount the primary and all is well. If it works, copy the secondary to the primary and your experiment becomes “stable.” We will create two of these for this purpose in the project (for those who are aware of the overlay file system this will make even more sense).
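The swap-over procedure above can be captured in a helper script. This is a sketch of mine, not from the LFS book; the device names /dev/sda5 and /dev/sda6 are hypothetical placeholders, so the script is only written here, not executed:

```shell
# Write the /usr/local primary/secondary swap procedure to a script.
# All device names below are hypothetical; substitute your own.
cat > /tmp/local-swap.sh << 'EOF'
#!/bin/sh
set -e
# 1. Copy the stable primary /usr/local onto the secondary partition.
mkdir -p /mnt/local2
mount /dev/sda6 /mnt/local2
cp -a /usr/local/. /mnt/local2/
umount /mnt/local2
# 2. Remount the secondary at /usr/local and run the experiment there.
umount /usr/local
mount /dev/sda6 /usr/local
# 3. If the experiment fails, fall back to the untouched primary:
#      umount /usr/local && mount /dev/sda5 /usr/local
EOF
chmod 750 /tmp/local-swap.sh
```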

/var
Since data will grow here as the system logs are written, there is the possibility it might fill the root partition if it is not separate. We will use a log management system to archive and remove old logs, so it is not essential to have it on a separate partition. Keep in mind, though, one vulnerability might be to simply cause the system to log so much that the partition fills, thus filling the root if it is not a separate partition. In The Toucan Linux Project (TTLP) we will use this as a build area for large applications (though this can be changed), so it will need extra space. We will also make sure the system logs can't fill the partition. Compiling the Chromium browser currently requires 20GB depending on the options, though 6-8GB is more of a target for TTLP; if you intend to build the bigger browsers and office suites like LibreOffice or OpenOffice, you should expect around 20GB for the large build area. For security and system stability, /var should be a separate partition.

/home
Home should definitely be a separate partition where possible. It is the user data area and it can easily fill up just by video or sound editing and is an easy target for someone with ill intent. Since your data, videos, audio, your own programs and code if you’re a programmer, and all the configuration and data for such things as games will be here, it should be whatever space is left after you create all the other partitions. Since there is a lot of change on this drive, the filesystem will become fragmented over time making it a good target for cleaning and defragmentation.


The Don’ts
Kernel modules are found in /lib/modules on the root drive. Now suppose the root filesystem is a JFS filesystem and the kernel needs to load the jfs.ko module in order to mount the root drive (at boot the bootloader generally can only access the /boot partition, so the kernel will first need to mount the root just to access the files it needs to start up). We have a problem here because the jfs.ko file resides on the root drive that we are trying to mount. The kernel will not be able to boot since it cannot access the modules it needs to boot. The idea of the root (/) partition is to contain everything the system needs to fully load the kernel into a state where it can use all the hardware and features of the system. It will need some libraries in /usr/lib later in the boot stages, but this really isn’t booting the kernel anymore; it is booting the system and bringing it into a desired state. DO NOT under any circumstances make /bin, /lib, or /sbin into a separate partition. Under the rule of KISS, the goal of this project is NOT to require a complicated initrd to boot the system, and creating any of these, including /usr, as a separate partition might break that rule. Creating one for /usr/src is okay, but never /usr.

Using a GUID Partition Table

Part of the UEFI specification was a new type of partition table called the GUID Partition Table, better known as GPT for short. UEFI removed the requirement for a Master Boot Record (MBR), which only one OS could own at a time. If you used to dual-boot Linux and Windows and Windows did an update, it would also wipe out the bootloader (probably LILO or GRUB version 1) and you’d have to repair it. UEFI allows multiple operating systems to have their own bootloaders as well as providing a standard boot method for any media like DVD, hard disk, USB drive, or SD drive. Fortunately, the designers of the UEFI standard allowed for UEFI + MBR booting, called “legacy” booting. We now have the choice of booting UEFI for systems that support it, MBR for systems that don’t, and generally a UEFI+MBR hybrid for UEFI systems. We will use a GPT partition table because it alleviates many issues that existed for the MBR type (4 primary partitions with a kludgy “extended” partition, 446 bytes for the bootstrap code, etc.).
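You can check which way the running host was booted: the kernel exposes /sys/firmware/efi only when the firmware handed control over through UEFI. A one-line test:

```shell
# Report whether this system booted via UEFI or legacy BIOS/MBR.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS/MBR"
fi
```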

If you’re using a system old enough that it doesn’t support UEFI, you’ll need to create a very small partition as primary partition number 1. It can be as small as 1MB in size to allow room for the boot loader. Primary partition 2 should be the swap partition, primary partition 3 should be the root partition, and primary partition 4 should be an extended partition covering the rest of the remaining space of the disk to allow you to create additional logical partitions. The instructions below assume you are using UEFI; adjust accordingly if you are not.

Partitioning – Method 1 – Simple and reasonable for a single user system

/boot – 250MB
swap – 2G or RAM size
/usr/local (primary) – 4GB
/usr/local (secondary) – 4GB
/ (root) – remaining disk space (kernel and application source will be here)

There is no reason for the /boot partition to be bigger unless you intend to test kernels. The two /usr/local partitions will hold all the binaries, include files, and libraries for the non-base portion of the system, so they can be much larger, say 8GB each, if you have the space.

Partitioning – Method 2 – Safer

/boot – 250MB
swap - 2GB or RAM size
/usr/local (primary) – 2GB to 8GB
/usr/local (secondary) – 2GB to 8GB
/var – 4GB or 25GB for a large build area
/ (root) – 20GB up to 60GB (kernel source and package source will be here)
/home – remaining disk space

For both methods you could make one /usr/local smaller and the other around 25GB. The larger one (the secondary) could be used for build space for large packages using a symbolic link. This is how I choose to partition. If you are using a 1TB or greater drive you can easily extend all of these sizes by 40%.
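For reference, the Method 2 layout could be written as an input script for sfdisk (part of util-linux). This is an illustrative sketch, not the partitioning step itself: the sizes mirror the list above, the partition names are my own, and feeding it to sfdisk against a real disk is destructive.

```text
label: gpt

# sfdisk shorthand types: U = EFI system, S = Linux swap, L = Linux filesystem
size=250MiB, type=U, name=boot
size=4GiB, type=S, name=swap
size=8GiB, type=L, name=local1
size=8GiB, type=L, name=local2
size=25GiB, type=L, name=var
size=60GiB, type=L, name=root
type=L, name=home
```

The last line omits a size, which sfdisk takes to mean the rest of the disk.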

What About “Dynamic” Partition Types?

It is possible to use the logical volume manager (LVM) to build all of these partitions in such a manner that you can resize them on the fly, such as shrinking one and adding the space to another, or even adding another drive and mapping its space into existing partitions. If you know what you are doing and want to do that, you certainly can. The LVM is somewhat slower and certainly more complicated (violating TTLP’s KISS principle), but I can understand if you choose to use it. I think with the size of modern drives this isn’t necessary except for servers with large arrays. If you choose to do this and can do it already, then you’re experienced enough to do it well.

Another choice is some of the newer filesystems that are a volume manager and file system combined into one. We will discuss that next time.

Now It’s Time to Partition

In MX Linux you will find the fdisk and parted programs to handle partitioning. A graphical tool, gparted, is also available if you prefer. If you have a dedicated drive for The Toucan Linux Project as recommended, I suggest you first choose to make a new GPT partition table to make sure everything is cleared. Most disks come set up to operate in Windows with a partition table and file system already installed. Clear it by making a new blank GPT.

Create the partitions first, then set the partition types. Here are the types as listed by fdisk.



For the swap partition you need code 19 (type 8200, Linux swap). For the Linux partitions you need code 20 (type 8300, Linux filesystem). If you are using a UEFI boot scheme you need to mark the boot partition as code 1 (type ef00). You can mark the root with code 22 (type 8304, Linux root (x86)), which I recommend. You can mark /home with code 28 (type 8302) if you’d like. The codes come from fdisk; they will be different with a different partitioning tool.

Since we’re building an LFS as our starter system you might want to check the comments in the book regarding partitioning: http://www.linuxfromscratch.org/lfs/view/development/chapter02/creatingpartition.html.

Copyright (C) 2019 by Michael R Stute

Monday, July 29, 2019

6 - Setup, Chroot, Console, and Automation



Toucan Linux Project - 6


A Roll Your Own Distribution

Goal 1 – Base System
Stage 2 – Building the Base System

We now have an intermediate system installed in $LFS/tools that contains a full tool chain to compile all the programs we intend to install in the base system. This system won’t do much more than boot, initialize, and provide simple standard Unix programs for command line use. Our goal for the first application is still angband, a console game that will make good use of the console through the ncurses library. If you’re an angband player you’ll appreciate this; if not, you might enjoy it once you’ve beaten the learning curve. It can at least occupy you while you wait for long compiles on the host. Either way, it is a target that will test the base system.

In this stage we will build all the applications and install them in the directory structure of the target. To do this we will use the chroot command to start a shell that will see the $LFS directory (/mnt/lfs) as the root directory. Within this directory must be all the tools necessary to build the base system, and that is what we created in $LFS/tools by building the intermediate system. After this stage we can even delete the intermediate system if we want (or tar it up as a way to build another system on separate hardware or new partitions).

Before we can begin we have to set up the directory, because we need a working set of devices in /dev and the other virtual filesystems mounted. Since our base filesystem only contains /tools, we need to do some configuration work first.

Step 1 – Make Basic Directories
The installation of the various programs will make most of the familiar directories such as /bin, /usr, /lib, and /share. But some directories must be manually made because they are the mount points for the virtual filesystems devfs, tmpfs, and proc. Run the following as the root user (unless otherwise noted, all commands in this stage are run by the root user) to create the mount points:

mkdir -pv $LFS/{dev,proc,run,sys}

If you are unfamiliar with the use of braces in bash: brace expansion creates a list from the comma-separated elements, each combined in turn with the surrounding text before the command runs. The above is the equivalent of the following:

mkdir -pv $LFS/dev && mkdir -pv $LFS/proc && mkdir -pv $LFS/run && mkdir -pv $LFS/sys
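You can watch brace expansion happen before the command ever runs; a small demo using a throwaway directory instead of the real $LFS:

```shell
# Brace expansion turns one argument into four before mkdir is invoked.
LFS=/tmp/lfs-demo             # throwaway demo value, not the real /mnt/lfs
echo $LFS/{dev,proc,run,sys}  # shows the expanded list
mkdir -pv $LFS/{dev,proc,run,sys}
ls "$LFS"
```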

Step 2 – Create Hard Device Nodes
Two devices must be present in the /dev directory in order to boot. They must be device nodes because they are required by the kernel before the device virtual filesystem becomes available (devfs). The devices needed are the console and null. Make them using:

mknod -m 600 $LFS/dev/console c 5 1
mknod -m 600 $LFS/dev/null c 1 3


This will also allow the system to boot with init=/bin/bash as a way to launch a shell for rescue. In the past each device node had to be made using mknod, and we would need to know the major and minor number of each device and the standard name for it. For disk drives there used to be such names as hda, hdb, as well as the now familiar sda, sdb, etc. Though they look like files, they are actually device nodes that use an inode in the filesystem to hold a special file that interfaces with a piece of hardware. To the kernel each device is simply specified as a major number (the type of hardware, such as “hard drive on the EISA bus,” “hard drive on the SCSI bus,” “floppy disk,” “serial terminal,” or “device on the universal serial bus”) and a minor number, which means “which one” of those devices: 0 for the first, 1 for the second, 2 for the third, and so on.
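You don't have to read a full directory listing to find the numbers; stat can print a node's major and minor directly (in hex):

```shell
# %t and %T print a device node's major and minor numbers in hex.
stat -c 'major=%t minor=%T' /dev/null   # /dev/null is character device 1,3
```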
You can see these device numbers by using the following:

$ ls -l /dev
total 0
crw------- 1 root root 10, 58 Jul 23 06:09 acpi_thermal_rel
drwxr-xr-x 2 root root 280 Jul 23 06:08 block
drwxr-xr-x 2 root root 80 Jul 23 06:08 bsg
drwxr-xr-x 3 root root 60 Jul 23 06:08 bus
drwxr-xr-x 2 root root 4060 Jul 23 06:09 char
crw------- 1 root root 5, 1 Jul 23 06:09 console
lrwxrwxrwx 1 root root 11 Jul 23 06:08 core -> /proc/kcore
drwxr-xr-x 2 root root 60 Jul 23 06:08 cpu
crw------- 1 root root 10, 62 Jul 23 06:09 cpu_dma_latency
drwxr-xr-x 7 root root 140 Jul 23 06:08 disk
drwxr-xr-x 3 root root 100 Jul 23 06:09 dri
crw-rw---- 1 root video 29, 0 Jul 23 06:09 fb0
lrwxrwxrwx 1 root root 13 Jul 23 06:08 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Jul 23 06:09 full
crw-rw-rw- 1 root root 10, 229 Jul 23 06:09 fuse
crw------- 1 root root 248, 0 Jul 23 06:09 hidraw0
crw------- 1 root root 248, 1 Jul 23 06:09 hidraw1
crw------- 1 root root 248, 2 Jul 23 06:09 hidraw2
crw------- 1 root root 248, 3 Jul 23 06:09 hidraw3
crw------- 1 root root 10, 228 Jul 23 06:09 hpet
crw------- 1 root root 240, 0 Jul 23 06:09 iio:device0
prw------- 1 root root 0 Jul 23 06:08 initctl
drwxr-xr-x 4 root root 600 Jul 23 06:09 input
crw-r--r-- 1 root root 1, 11 Jul 23 06:09 kmsg
srw-rw-rw- 1 root root 0 Jul 23 06:09 log
drwxr-xr-x 2 root root 60 Jul 23 06:08 mapper
crw------- 1 root root 10, 227 Jul 23 06:09 mcelog
crw-rw---- 1 root video 241, 0 Jul 23 06:09 media0
crw-rw---- 1 root video 241, 1 Jul 23 06:09 media1
crw------- 1 root root 245, 0 Jul 23 06:09 mei0
crw-r----- 1 root kmem 1, 1 Jul 23 06:09 mem


With ls -l, where the size of a regular file would be printed, ls outputs the major and minor numbers for a device node. At the top of my list were a number of HID (human interface device) entries; I cut the listing so we could see better. Note the last one is device 1,1, which is kernel memory – even memory looks like a file for reading.

The Console
The console is special because when Unix was first created there was a single serial device to which the kernel was guaranteed to be able to output text. This was known as “the console,” and if the kernel was booting and experienced an error, the clues to fixing that error would be printed to the console—and yes, I mean printed. Most hardware had a special serial port for connecting just the console. In the early days, as seen in this very famous picture of Ken Thompson and Dennis Ritchie,

it was a teletype, a type of terminal that output lines of text on a roll of paper and had a keyboard for input that, as the operator typed, also printed that input. This device is major number 5, minor number 1. As you can see above, when the device node was created it was specified as a character type (“c”), major number 5, and minor number 1. For Linux that is the virtual terminal that looks like a text screen. You can see it by pressing “CTRL-ALT-F1”; press “CTRL-ALT-F7” to return (though this might be different on various distros, so if CTRL-ALT-F7 doesn’t work, try CTRL-ALT-F9 and move backwards through the function keys). If you do this you’ll probably see some text above a login prompt. I am guessing that if you are attempting TTLP you know this, but just in case some very brave souls are following along, I’ve decided to cover this.

This is Linux’s version of the console. It assumes there must be a video card of some sort attached to the system (it can boot without one though in “blind mode”) and the console code in the kernel will create a virtual terminal using the video card. It is most likely driven by what is called a “frame buffer” for most systems but in older times if the video circuitry could only provide an 80x25 text screen Linux would support that.

Take a look at the /etc/inittab from the last article. Right in the middle you’ll see this:

1:2345:respawn:/sbin/agetty --noclear tty1 9600
2:2345:respawn:/sbin/agetty tty2 9600
3:2345:respawn:/sbin/agetty tty3 9600
4:2345:respawn:/sbin/agetty tty4 9600
5:2345:respawn:/sbin/agetty tty5 9600
6:2345:respawn:/sbin/agetty tty6 9600 



That section tells init to run a program called agetty, located in the system bin directory (/sbin), with some parameters. You’ll see that in my case there are six of them, and this is fairly standard. Each specifies a device using “tty,” which stands for “teletype.” The program agetty is a simple program that opens a tty port, prints the contents of a file called /etc/issue, prompts for a login name, and then runs the /bin/login command, which handles whatever is used for a text mode command line login procedure. The above lines create six terminals which Linux maps to the virtual terminals on CTRL-ALT-F1 through F6. Here’s the listing of my first 10 tty devices:

$ ls -l /dev/tty?
crw--w---- 1 root tty 4, 0 Jul 23 06:09 /dev/tty0
crw--w---- 1 root tty 4, 1 Jul 23 06:09 /dev/tty1
crw--w---- 1 root tty 4, 2 Jul 23 06:09 /dev/tty2
crw--w---- 1 root tty 4, 3 Jul 23 06:09 /dev/tty3
crw--w---- 1 root tty 4, 4 Jul 23 06:09 /dev/tty4
crw--w---- 1 root tty 4, 5 Jul 23 06:09 /dev/tty5
crw--w---- 1 root tty 4, 6 Jul 23 06:09 /dev/tty6
crw--w---- 1 mstute mstute 4, 7 Jul 23 06:09 /dev/tty7
crw--w---- 1 root tty 4, 8 Jul 23 06:09 /dev/tty8
crw--w---- 1 root tty 4, 9 Jul 23 06:09 /dev/tty9

As you can see, the root user owns them all but device 7 (tty7), which is owned by me since this is my system and I am logged in. For you it will be the user name you used to log in to your system. For MX Linux it will be “demo.” When X Windows starts (the graphical component of Unix) it needs a terminal as well, and it will start by using tty7, which is mapped to CTRL-ALT-F7. When I log in through X I become the owner of the tty device so I can read and write to it. Some distros of Linux will bring up fewer virtual terminals and possibly start X on a lower number, but most versions of Linux will default to /dev/tty7 for X. For X this device is still used for keyboard (and mouse, indirectly) input. Like all things in Linux this can be configured, so we might as well follow it to the bottom. On my TTLP system I use SLiM as the X display manager.

A display manager handles multiple X window displays using a protocol called the X Display Manager Control Protocol. Its job is similar to init, agetty, and login for text terminals. It should prompt for a login and password (or other authentication type), authenticate the user, and then start a session. In command line mode the initial shell (/bin/sh) is considered the session process (the lifetime of the process: when it goes away the user is considered logged out and the OS can cull all the resources). For X this is generally a window manager or, in the past, a terminal emulator. When you close this process, your session is over and you are logged out. Examples of display managers are LightDM, XDM, SLiM, SDDM, and GDM. I use SLiM, and the configuration file to select the display manager is /etc/conf.d/xdm. Its contents are:

# We always try and start X on a static VT. The various DMs normally default
# to using VT7. If you wish to use the xdm init script, then you should ensure
# that the VT checked is the same VT your DM wants to use. We do this check to
# ensure that you haven't accidentally configured something to run on the VT
# in your /etc/inittab file so that you don't get a dead keyboard.
CHECKVT=7


# What display manager do you use ? [ xdm | gdm | sddm | gpe | lightdm | entrance ]
# NOTE: If this is set in /etc/rc.conf, that setting will override this one.
DISPLAYMANAGER="slim" 


This shows that the display manager will be SLiM, but it also shows why tty7 is the one chosen. The CHECKVT=7 tells which virtual terminal to try as the first available. If a distro only uses three virtual terminals, this might be set to four and CTRL-ALT-F4 would be the X window session.

Step 3 – Mount virtual filesystems
If you can’t do all the following steps in one session, that is okay. But you’ll need to use the chroot command to enter into the target and remount the virtual filesystems each time. This is necessary because some of the commands depend on them, and some configuration scripts use information published in /sys and /proc to determine how to configure the software for the system. Create a script, $LFS/sources/mount_vfs, to mount them:

cat > $LFS/sources/mount_vfs << EOF
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
EOF



Make it executable:

chmod 750 $LFS/sources/mount_vfs


and finally run it:

$LFS/sources/mount_vfs


If you shut down and resume this stage later, you will need to execute that script after mounting the target’s filesystem on the host.
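The resume sequence can be captured in a tiny helper script. This is a sketch only: /mnt/lfs and /dev/sdb2 are placeholder values for your mount point and target partition, and the script is written to /tmp purely for illustration.

```shell
# Hedged sketch of a resume helper; /mnt/lfs and /dev/sdb2 are
# example values -- substitute your own mount point and partition.
cat > /tmp/resume_ttlp << "EOF"
export LFS=/mnt/lfs
mount /dev/sdb2 $LFS
$LFS/sources/mount_vfs
EOF
chmod 750 /tmp/resume_ttlp
```

The quoted EOF keeps $LFS unexpanded in the file, so the script sets it fresh each time it runs.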

Step 4 – Chroot Into the Target
Now we begin building the software of the system itself. You will need the network working on the host before you run this command. Until the network is fully up on the base system (many steps from now) we’ll use the network of the host as the network for the target.
Make another script to handle the change of root to enter into the target’s filesystem:

cat > $LFS/sources/ch2tgt << "EOF"
chroot "$LFS" /tools/bin/env -i \
    HOME=/root TERM="$TERM" PS1='(ttlp chroot) \u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
    /tools/bin/bash --login +h
EOF
chmod 750 $LFS/sources/ch2tgt

The heredoc delimiter is quoted so the variables are written literally; $LFS and $TERM are resolved when the script runs, so LFS must still be exported in that shell.



This uses the -i option of the env command to clear all variables from the environment and then populates it with only the ones we want: HOME, TERM, PS1, and PATH. Notice that /tools/bin is now last in the path: as the new commands are built they will be installed in /bin, /sbin, /usr/bin, and /usr/sbin, and we want the shell to start using them. Since bash remembers the location of commands and does not search the path again, we use the +h option to turn off hashing. This ensures bash searches the PATH for executables and finds the new ones as we create them.
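You can see the effect of env -i in miniature with a throwaway command. The variable values here are examples; the point is that anything not passed explicitly (USER below) simply does not exist in the child process.

```shell
# env -i starts the child with an empty environment, then adds only
# the variables given on the command line.
env -i HOME=/root PATH=/bin:/usr/bin sh -c 'echo "HOME=$HOME USER=${USER:-unset}"'
# prints: HOME=/root USER=unset
```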

Now run the chroot script:

$LFS/sources/ch2tgt 


We are now inside the target using the intermediate tools installed in /tools. All commands now operate only on the target and not the host, though the target is still fully dependent on the host. To build the target we use the same process that built the intermediate system: unarchive a package, configure it, compile it, and install it. It is tedious, and for LFS it is recommended that you perform each step manually as part of the learning process.

But in The Toucan Linux Project we are more interested in getting our base system complete so we can begin the greater work of extending it into a more usable system. For this reason we will write a bit of code to do the bulk of the work for most of the packages, leaving the remaining ones to do manually. Later we will extend this work into a package manager that we can use to maintain the system and compile everything with more advanced optimization options. Lastly, we’ll use a system of two partitions to allow experiments to be performed safely.

Step 5 – Setting Up the Target
We need to do some preparation work before we begin. The following commands, run inside the chroot, create the basic directory structure on the target:

mkdir -pv /{bin,boot,etc/{opt,sysconfig},home,lib/firmware,mnt,opt}
mkdir -pv /{media/{floppy,cdrom},sbin,srv,var}
install -dv -m 0750 /root
install -dv -m 1777 /tmp /var/tmp
mkdir -pv /usr/{,local/}{bin,include,lib,sbin,src}
mkdir -pv /usr/{,local/}share/{color,dict,doc,info,locale,man}
mkdir -v /usr/{,local/}share/{misc,terminfo,zoneinfo}
mkdir -v /usr/libexec
mkdir -pv /usr/{,local/}share/man/man{1..8}
mkdir -v /usr/lib/pkgconfig


case $(uname -m) in
x86_64) mkdir -v /lib64 ;;
esac


mkdir -v /var/{log,mail,spool}
ln -sv /run /var/run
ln -sv /run/lock /var/lock
mkdir -pv /var/{opt,cache,lib/{color,misc,locate},local} 


This is directly from LFS. Unlike LFS we will make heavy use of /usr/local and almost no use of /opt. In TTLP, /opt is the place where bad programs go, the ones that don’t play nice with others; think of it as jail.

Step 6 – Additional Configuration
Notice that bash doesn’t have a name for the user in the prompt; it says “I have no name!”. That’s because there is no user database yet. We have some more work to do, and it is contained in LFS. It is replicated here to make matters easier. Be sure to check the LFS book for the reasoning behind this work; it is basic, but essential, in nature.

Make essential links:

ln -sv /tools/bin/{bash,cat,chmod,dd,echo,ln,mkdir,pwd,rm,stty,touch} /bin
ln -sv /tools/bin/{env,install,perl,printf} /usr/bin
ln -sv /tools/lib/libgcc_s.so{,.1} /usr/lib
ln -sv /tools/lib/libstdc++.{a,so{,.6}} /usr/lib


ln -sv bash /bin/sh
ln -sv /proc/self/mounts /etc/mtab 


Create the user database:

cat > /etc/passwd << "EOF"
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/dev/null:/bin/false
daemon:x:6:6:Daemon User:/dev/null:/bin/false
messagebus:x:18:18:D-Bus Message Daemon User:/var/run/dbus:/bin/false
nobody:x:99:99:Unprivileged User:/dev/null:/bin/false
EOF



Create the groups file (we’ll stick with the old school standards):

cat > /etc/group << "EOF"
root:x:0:
bin:x:1:daemon
sys:x:2:
kmem:x:3:
tape:x:4:
tty:x:5:
daemon:x:6:
floppy:x:7:
disk:x:8:
lp:x:9:
dialout:x:10:
audio:x:11:
video:x:12:
utmp:x:13:
usb:x:14:
cdrom:x:15:
adm:x:16:
messagebus:x:18:
input:x:24:
mail:x:34:
kvm:x:61:
wheel:x:97:
nogroup:x:99:
users:x:999:
EOF



Restart bash to allow it to find the user name:


exec /tools/bin/bash --login +h

Lastly, create some log files the system startup programs need:

touch /var/log/{btmp,lastlog,faillog,wtmp}
chgrp -v utmp /var/log/lastlog
chmod -v 664 /var/log/lastlog
chmod -v 600 /var/log/btmp 


Step 7 – Automating the Build Procedure
Many of these packages use the same basic installation procedure, which can be repeated. To automate it we’ll create a basic script to handle the bulk of the work. We’ll build all packages of the LFS system and add a few of our own. These will forever be considered the “base” system and contain the minimum set of tools we need to maintain the system. They will be installed in the root under /bin, /sbin, /usr/bin, and /usr/sbin; libraries go in /lib and /usr/lib, shared data in /usr/share, and base configurations in /etc.

The LFS book recommends using the test procedures of the packages to verify they work, usually with make check or make test. For some packages this is sensible, but since we are building a system, and many packages assume they are building on a complete system rather than a partial one, many tests will fail. Some of them take a lot of time. For others, running the tests is essential. The important ones I will point out here to be sure you take the time; the rest you can skip if you want, though I recommend you run them. Since part of building a high-performance experimental system is running the tests after compilation, we’ll need to do that as we test various options. If you run all the tests to be sure the base is stable, it’s a good foundation for the experimental work. However, since we are using the experimental version of LFS, be prepared for some test failures that are not noted in the LFS book.

The basic idea is to build each package in six steps:
1) Unarchive it
2) Configure it
3) Build it with make
4) Install it, usually with make install
5) Test the install
6) Clean up
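The six steps can be rehearsed end to end on a throwaway package. Everything below is invented for the demonstration: the foo-1.0 tarball is fabricated on the spot, and its Makefile stands in for a real build system with a configure step.

```shell
mkdir -p /tmp/cci-demo && cd /tmp/cci-demo
# Fabricate a toy package so the walk-through can run anywhere
mkdir foo-1.0
printf 'all:\n\techo ok > result\ninstall:\n\tcp result /tmp/cci-demo/installed\n' > foo-1.0/Makefile
tar -cf foo-1.0.tar foo-1.0 && rm -rf foo-1.0

tar -xf foo-1.0.tar              # 1) unarchive
cd foo-1.0                       # 2) configure (a real package runs ./configure here)
make                             # 3) build
make install                     # 4) install
cat /tmp/cci-demo/installed      # 5) test the install; prints "ok"
cd .. && rm -rf foo-1.0          # 6) clean up
```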

You can use the aliases we created before, ut and del, if you choose to do this manually. They will save you some keystrokes.

For now, we’ll build a small Perl script that will do the bulk of the work, including running the checks. Later we can use this code in the package manager. The theory of operation is to have a standard configuration script that handles the basic configuration for as many programs in the base system as possible, which is simply changing the install prefix to /usr, since by convention configure defaults to /usr/local:

./configure --prefix=/usr 


Some packages require a separate build directory, though most do not. The compile part of the process is generally just

make 


Others will need some tweaking on the tests which is generally:

make check


Yet others require a bit of work on the install, though most of the time it is

make install


Because there are so many variations, we will create a file for each package that requires differences. This is okay, because in our package manager we want absolute control over each and every package anyway, right down to the compiler options used and even how it is linked. But we’ll have default scripts for those that don’t need changes.

Start by making the default scripts in a directory underneath the root. We’ll name this directory cci for configure-compile-install:

mkdir ~/cci
cd ~/cci



Now we’ll make five files: four containing the default commands plus a null placeholder:

echo "./configure --prefix=/usr" > config
echo 'make -j $CCI_CPUS' > compile
echo "make check" > test
echo "make install" > install
echo "# NOP" > null 


That gives us four default files named config, compile, test, and install (plus null, a no-op placeholder used later). The compile file uses an environment variable called CCI_CPUS which should be set to the number of processors on the system. Let’s add that to root’s configuration:

cd
echo "export CCI_CPUS=`cat /proc/cpuinfo | grep vendor | wc -l`" >> .bashrc
cat .bashrc



At the end of .bashrc you’ll find a line that exports CCI_CPUS equal to the number of processors on your system. If for any reason you don’t want to use all the processors to compile, then change this to a lower number.
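If you want to sanity-check the value, counting the lines that begin with "processor" in /proc/cpuinfo gives the same number on x86; this is just an alternative spelling of the command above, not something the build requires.

```shell
# One line per logical CPU starts with "processor" in /proc/cpuinfo
cpus=$(grep -c ^processor /proc/cpuinfo)
echo "CCI_CPUS=$cpus"
```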

Now we’ll start our Perl script to do this work. It starts with a Perl standard to require strict variable scoping to ensure we specify the scope of our variables. It will also make sure we don’t mistype a variable name somewhere in the program.
The program will accept four arguments that override the default configuration files: -c for the configuration file, -m for the compile (make) file, -t for the test file, and -i for the install file. If these aren’t specified it uses the default files automatically, so that we need to create as few extra files as possible. When we are done we’ll have everything we need (the data) to drive our package manager version 1.0. Here’s the script so far:

1 #!/tools/bin/perl
2
3 use strict;
4 use Getopt::Std;
5 use File::Copy;
6
7 my %opts;
8
9 #Get command line arguments
10 getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");



There isn’t any need to copy and paste this, as the final file will be made at the end. These snippets are simply for discussion and may still be of use to you even if you aren’t a Perl programmer. Line 1, as with any shell script, tells what program to use to interpret the script. This would normally be /usr/bin/perl, but our Perl interpreter is currently installed in /tools/bin in the intermediate system. Line 3 sets up the strict requirements for references, variables, and subroutines. Line 4 sets up the standard Perl command-line parser. Line 5 imports the File::Copy routines (copy and move). Line 7 declares a hash called %opts and, finally, line 10 uses the getopts subroutine to populate the %opts hash with the options from the command line.

Next we declare variables and set paths.

12 # Declare variables and set paths
13 my $package;
14 my $srcs="/sources";
15 my $build_dir="$srcs/build";
16 my $config_dir="$srcs/configs";
17 my ($config,$test,$compile,$install);
18 my $BIN="/tools/bin";
19 my $name=$opts{n}; 


The $package variable will hold the filename of the tarball, which is our package. The $srcs variable is the path to the directory containing the source tarballs. The $build_dir is where the tarball will be exploded and the code compiled (this will be deleted when we are done). Underneath the source directory we will create a directory called configs which will contain the various configuration files we need for each package. Line 17 declares four variables: $config will contain the name of the configuration file, $test the name of the test file, $compile the name of the compile file, and $install the name of the install file (this will all become clear shortly). Last, we create a variable called $BIN to hold the path to the executables; if you wish to make the script more secure, put $BIN in front of all the commands used in system(). In this intermediate script we use the Perl system() function to execute the commands we need in a shell. This isn’t a great practice unless you specify exactly which command by using an absolute path. We can’t do that now because we want the system to use the new programs as we build them. This is an insecure practice, but many things about building a system are insecure. If this is a great concern, disable the network while you work. We also have to use system() for certain functions we don’t have available in native Perl because our installation is very basic.

Another bad security practice is to read data from an external source, such as a file, and pass it to the system() function, because we can’t be sure of the contents and they can be used to inject a command. For instance, say you have a file that contains the names of directories you wish to list, one per line. It might look like this:

/bin
/usr/bin
/usr/local/bin



If the program opened the file and used the ls command through the shell to get the file list, you could have a problem. Suppose we put the name of the directory into a variable called $dir and did the following to capture the output:

my $var=`ls $dir`;



Seems safe enough but suppose the input file contained:

/bin && useradd hacker -p "hashedpassword"


where hashedpassword is a valid password hash, the program would create a new user with a known password if run as root. Be aware this script is not designed for anything but our first system build. We will delete it later.
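You can replay the injection harmlessly, with touch standing in for useradd (the /tmp/injected file name is just for the demonstration):

```shell
rm -f /tmp/injected
dir='/tmp && touch /tmp/injected'   # attacker-controlled "directory name"
sh -c "ls $dir" > /dev/null         # actually runs: ls /tmp && touch /tmp/injected
test -f /tmp/injected && echo "smuggled command ran"
```

The shell never knew $dir contained two commands; it just expanded the string and executed it.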
Prepare by running a few commands:

mkdir -p /sources/{configs,build}
cd ~/cci
mv config install compile test /sources/configs



To speed up the build process, we can compile on a RAM disk. The tmpfs virtual filesystem will serve our purposes nicely. Create one and mount it on /sources/build using the following:

mount -o size=250M -t tmpfs tmpfs /sources/build

This will create a RAM disk that will use at most 250MB of space. It will automatically grow and shrink its memory usage as we add and delete files. That is sufficient for anything we want to compile.
Back to the listing. The subroutine setup() is called at line 22 which will determine which scripts to use. The results are printed to the terminal for viewing.

21 ###### Main routine
22 setup();
23
24 print "Using the following to build $package:\n";
25 print " config = $config\n";
26 print " compile = $compile\n";
27 print " test = $test\n";
28 print " install = $install\n";



Now we reach the heart of the process.

30 # Explode the tar
31 my $dir=untar($package,$build_dir);
32 chdir($dir);
33 # Make the build script
34 my $script=make_script($config,$compile,$test,$install);
35 open (OUT,">cci_build") or fail();
36    print OUT $script;
37 close(OUT);
38 print "Build script is:\n$script\n";
39
40 # Execute it
41 system("bash cci_build | tee ../$package.log");
42
43 # Clean up
44 system("rm -rf $dir"); 


Line 31 calls a subroutine to explode the tar into the build directory. It also returns the name of the directory created, which is stored in $dir. Line 32 changes the current directory to the newly created package directory. Line 34 uses the subroutine make_script() to build the build script from the config, compile, test, and install scripts. Lines 35 to 37 create the build script, called cci_build, in the package directory, and line 38 prints its contents to the terminal. At line 41 bash is called to run the script (it is not marked executable), with the results piped through tee to create a log file while printing to the screen. Lastly, line 44 uses the rm command to delete the package directory. Lines 49 through 81 (shown in the complete listing below) create a directory under the configs directory with the package name and move any custom scripts into it. These will be needed later for the full package manager. This occurs only if we provide the -n option to give the package a name.

The make_script() subroutine is passed the four scripts as determined by the setup() subroutine, reads each one in, appends it to the last, and returns the result. This will be the contents of the build script. The reading is handled by a subroutine that uses Perl’s slurp mode to read the complete contents of a file in one go instead of line by line (see read_in() at lines 110–117 in the complete listing).

86 sub make_script {
87    my $config=shift;
88    my $compile=shift;
89    my $test=shift;
90    my $install=shift;
91
92    my $script=read_in($config);
93    my $contents=read_in($compile);
94    $script.=$contents;
95    my $contents=read_in($test);
96    $script.=$contents;
97    my $contents=read_in($install);
98    $script.=$contents;
99    return($script);
100 }



The tarball is exploded using a call to tar. The subroutine is passed the name of the tarball and the build directory. Line 125 builds the command line. We want it to use the intermediate tar command until tar appears in the /bin directory later, so we can’t use the $BIN variable yet. It changes to the build directory at line 127 and executes the command at line 129. To derive the package name, a regular expression replaces the .tar.* extension with a null string, leaving the package name and version.

121 sub untar {
122    my $package=shift;
123    my $build_dir=shift;
124
125    my $cmd="tar -xf $srcs/$package";
126    print " Changing to $build_dir\n";
127    chdir($build_dir);
128    print " Executing $cmd...\n";
129    system($cmd);
130    # Return the directory that was created
131    my $temp=$package;
132    $temp =~ s/.tar.*$//;
133    return("$build_dir/$temp");
134 }



The last bit we are interested in is the setup() subroutine. This checks the command-line options for each of the four parts of the build script (config, compile, test, install) to see if they are present. If they are, it expects to find them in the configs directory; if not, it uses the default name from the configs directory. The subroutine first checks that a package file has been given with the -p option and that it exists (lines 140–150) and that the build directory exists (lines 152–156). The -c option is checked at line 159, and if present the argument is checked to be sure it is a readable file in the configs directory. If it isn’t, we stop with an error at lines 160–161. The config script name is set to the default at line 164, but if the -c option is present it is set to the supplied name at line 165. The same logic is used for the -m option (lines 168–175), the -t option (lines 177–184), and the -i option (lines 186–193).

138 sub setup {
139    # Check we have a package
140    if(!$opts{p}) {
141       print "A package (tarball) must be specified\n";
142       exit;
143    }
144    $package=$opts{p};
145
146    # Check the package exists
147    if (! -f "$srcs/$package") {
148       print "Package $package not found in $srcs\n";
149       exit;
150    }
151
152    # Check that the build dir exists
153    if (! -d "$build_dir" ) {
154       print "The build directory is invalid: $build_dir\n";
155       exit;
156    }
157
158    #Verify any supplied scripts are indeed readable
159    if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
160       print "Configure script $config_dir/$opts{c} doesn't exist\n";
161       exit;
162    }
163    else {
164       $config="config";
165       $config="$opts{c}" if($opts{c});
166    }
167
168    if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
169       print "Compile script $config_dir/$opts{m} doesn't exist\n";
170       exit;
171    }
172    else {
173       $compile="compile";
174       $compile="$opts{m}" if($opts{m});
175    }
176
177    if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
178       print "Test script $config_dir/$opts{t} doesn't exist\n";
179       exit;
180    }
181    else {
182       $test="test";
183       $test="$opts{t}" if($opts{t});
184    }
185
186    if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
187       print "Install script $config_dir/$opts{i} doesn't exist\n";
188       exit;
189    }
190    else {
191       $install="install";
192       $install="$opts{i}" if($opts{i});
193    }
194 } 


The complete listing:

1 #!/tools/bin/perl
2
3 use strict;
4 use Getopt::Std;
5 use File::Copy;
6
7 my %opts;
8
9 #Get command line arguments
10 getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");
11
12 # Declare variables and set paths
13 my $package;
14 my $srcs="/sources";
15 my $build_dir="/sources/build";
16 my $config_dir="$srcs/configs";
17 my ($config,$test,$compile,$install);
18 my $BIN="/tools/bin";
19 my $name=$opts{n};
20
21 ###### Main routine
22 setup();
23
24 print "Using the following to build $package:\n";
25 print " config = $config\n";
26 print " compile = $compile\n";
27 print " test = $test\n";
28 print " install = $install\n";
29
30 # Explode the tar
31 my $dir=untar($package,$build_dir);
32 chdir($dir);
33 # Make the build script
34 my $script=make_script($config,$compile,$test,$install);
35 open (OUT,">cci_build") or fail();
36    print OUT $script;
37 close(OUT);
38 print "Build script is:\n$script\n";
39
40 # Execute it
41 system("bash cci_build | tee ../$package.log");
42
43 # Clean up
44 system("rm -rf $dir");
45
46 # If name is set create the directory for package
47 # and move in the scripts into it for
48 # future use
49 if($name) {
50    print "Changing to $srcs\n";
51    chdir($srcs);
52    system("mkdir $name");
53    if($opts{c}) {
54       if($opts{c} eq "null") {
55          copy("$config_dir/$config" ,"$name/config");
56       } else {
57          move("$config_dir/$config" ,"$name/config");
58       }
59    }
60    if($opts{m}) {
61       if($opts{m} eq "null") {
62          copy("$config_dir/$compile","$name/compile");
63       } else {
64          move("$config_dir/$compile","$name/compile");
65       }
66    }
67    if($opts{t}) {
68       if($opts{t} eq "null") {
69          copy("$config_dir/$test","$name/test");
70       } else {
71          move("$config_dir/$test","$name/test");
72       }
73    }
74    if($opts{i}) {
75       if($opts{i} eq "null") {
76          copy("$config_dir/$install","$name/install");
77       } else {
78          move("$config_dir/$install","$name/install");
79       }
80    }
81 }
82 exit;
83
84 # Make the build script by reading in all files
85 # and appending in execution order
86 sub make_script {
87    my $config=shift;
88    my $compile=shift;
89    my $test=shift;
90    my $install=shift;
91
92    my $script=read_in($config);
93    my $contents=read_in($compile);
94    $script.=$contents;
95    my $contents=read_in($test);
96    $script.=$contents;
97    my $contents=read_in($install);
98    $script.=$contents;
99    return($script);
100 }
101
102 # Fail but clean up
103 sub fail
104 {
105    system("rm -rf $dir") if($dir);
106    print "Fail\n";
107    exit;
108 }
109
110 # Return the complete contents of a file
111 sub read_in
112 {
113    local $/ = undef;
114    open my $fh, "$config_dir/$_[0]" or die "Can't open $_[0]: $!";
115    my $slurp = <$fh>;
116    return $slurp;
117 }
118
119 # Untar the package in the build area and return the
120 # directory it creates
121 sub untar {
122    my $package=shift;
123    my $build_dir=shift;
124
125    my $cmd="$BIN/tar -xf $srcs/$package";
126    print "   Changing to $build_dir\n";
127    chdir($build_dir);
128    print "   Executing $cmd...\n";
129    system($cmd);
130    # Return the directory that was created
131    my $temp=$package;
132    $temp =~ s/.tar.*$//;
133    return("$build_dir/$temp");
134 }
135
136 # Sanity check the command line
137 # Determine scripts to use
138 sub setup {
139    # Check we have a package
140    if(!$opts{p}) {
141       print "A package (tarball) must be specified\n";
142       exit;
143    }
144    $package=$opts{p};
145
146    # Check the package exists
147    if (! -f "$srcs/$package") {
148       print "Package $package not found in $srcs\n";
149       exit;
150    }
151
152    # Check that the build dir exists
153    if (! -d "$build_dir" ) {
154       print "The build directory is invalid: $build_dir\n";
155       exit;
156    }
157
158    #Verify any supplied scripts are indeed readable
159    if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
160       print "Configure script $config_dir/$opts{c} doesn't exist\n";
161       exit;
162    }
163    else {
164       $config="config";
165       $config="$opts{c}" if($opts{c});
166    }
167
168    if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
169       print "Compile script $config_dir/$opts{m} doesn't exist\n";
170       exit;
171    }
172    else {
173       $compile="compile";
174       $compile="$opts{m}" if($opts{m});
175    }
176
177    if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
178       print "Test script $config_dir/$opts{t} doesn't exist\n";
179       exit;
180    }
181    else {
182       $test="test";
183       $test="$opts{t}" if($opts{t});
184    }
185
186    if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
187       print "Install script $config_dir/$opts{i} doesn't exist\n";
188       exit;
189    }
190    else {
191       $install="install";
192       $install="$opts{i}" if($opts{i});
193    }
194 }

Run the following to make the script in the ~/cci directory.

cat > cci.pl << "EOF"
#!/tools/bin/perl

use strict;
use Getopt::Std;
use File::Copy;

my %opts;

#Get command line arguments
getopts("p:c:m:t:i:n:",\%opts) or die("Options are invalid\n");

# Declare variables and set paths
my $package;
my $srcs="/sources";
my $build_dir="/sources/build";
my $config_dir="$srcs/configs";
my ($config,$test,$compile,$install);
my $BIN="/tools/bin";
my $name=$opts{n};

###### Main routine
setup();

print "Using the following to build $package:\n";
print "   config  = $config\n";
print "   compile = $compile\n";
print "   test    = $test\n";
print "   install = $install\n";

# Explode the tar
my $dir=untar($package,$build_dir);
chdir($dir);
# Make the build script
my $script=make_script($config,$compile,$test,$install);
open (OUT,">cci_build") or fail();
print OUT $script;
close(OUT);
print "Build script is:\n$script\n";

# Execute it
system("bash cci_build | tee ../$package.log");

# Clean up
system("rm -rf $dir");

# If name is set create the directory for package
# and move in the scripts into it for
# future use
if($name) {
   print "Changing to $srcs\n";
   chdir($srcs);
   system("mkdir $name");
   if($opts{c}) {
      if($opts{c} eq "null") {
         copy("$config_dir/$config" ,"$name/config");
      } else {
         move("$config_dir/$config" ,"$name/config");
      }
   }
   if($opts{m}) {
      if($opts{m} eq "null") {
         copy("$config_dir/$compile","$name/compile");
      } else {
         move("$config_dir/$compile","$name/compile");
      }
   }
   if($opts{t}) {
      if($opts{t} eq "null") {
         copy("$config_dir/$test","$name/test");
      } else {
         move("$config_dir/$test","$name/test");
      }
   }
   if($opts{i}) {
      if($opts{i} eq "null") {
         copy("$config_dir/$install","$name/install");
      } else {
         move("$config_dir/$install","$name/install");
      }
   }
}
exit;

# Make the build script by reading in all files
# and appending in execution order
sub make_script {
   my $config=shift;
   my $compile=shift;
   my $test=shift;
   my $install=shift;

   my $script=read_in($config);
   my $contents=read_in($compile);
   $script.=$contents;
   my $contents=read_in($test);
   $script.=$contents;
   my $contents=read_in($install);
   $script.=$contents;
   return($script);
}

# Fail but clean up
sub fail
{
   system("rm -rf $dir") if($dir);
   print "Fail\n";
   exit;
}

# Return the complete contents of a file
sub read_in
{
    local $/ = undef;
    open my $fh, "$config_dir/$_[0]" or die "Can't open $_[0]: $!";
    my $slurp = <$fh>;
    return $slurp;
}

# Untar the package in the build area and return the
# directory it creates
sub untar {
   my $package=shift;
   my $build_dir=shift;

   my $cmd="$BIN/tar -xf $srcs/$package";
   print "   Changing to $build_dir\n";
   chdir($build_dir);
   print "   Executing $cmd...\n";
   system($cmd);
   # Return the directory that was created
   my $temp=$package;
   $temp =~ s/.tar.*$//;
   return("$build_dir/$temp");
}

# Sanity check the command line
# Determine scripts to use
sub setup {
   # Check we have a package
   if(!$opts{p}) {
      print "A package (tarball) must be specified\n";
      exit;
   }
   $package=$opts{p};

   # Check the package exists
   if (! -f "$srcs/$package") {
      print "Package $package not found in $srcs\n";
      exit;
   }

   # Check that the build dir exists
   if (! -d "$build_dir" ) {
      print "The build directory is invalid: $build_dir\n";
      exit;
   }

   #Verify any supplied scripts are indeed readable
   if ($opts{c} && (! -f "$config_dir/$opts{c}")) {
      print "Configure script $config_dir/$opts{c} doesn't exist\n";
      exit;
   }
   else {
      $config="config";
      $config="$opts{c}" if($opts{c});
   }

   if ($opts{m} && (! -f "$config_dir/$opts{m}")) {
      print "Compile script $config_dir/$opts{m} doesn't exist\n";
      exit;
   }
   else {
      $compile="compile";
      $compile="$opts{m}" if($opts{m});
   }

   if ($opts{t} && (! -f "$config_dir/$opts{t}")) {
      print "Test script $config_dir/$opts{t} doesn't exist\n";
      exit;
   }
   else {
      $test="test";
      $test="$opts{t}" if($opts{t});
   }

   if ($opts{i} && (! -f "$config_dir/$opts{i}")) {
      print "Install script $config_dir/$opts{i} doesn't exist\n";
      exit;
   }
   else {
      $install="install";
      $install="$opts{i}" if($opts{i});
   }
}
EOF

Or you can choose to download it because of the formatting issues blogger has with code.

cci.pl

With the intermediate build script built, we are ready to start building the system in earnest. We will use LFS as the guide, but instead of executing the commands by hand, we’ll create the required scripts and use the cci.pl Perl program to do the work. We’ll begin this work next time.

Copyright (C) 2019 by Michael R Stute