Friday, August 30, 2019

10 - Final Configuration, Kernel, and Bootloader



A Roll Your Own Distribution
Goal 1 – Base System


Stage 2 – Building the Base System

In this article we will do only three things: create the filesystem table (/etc/fstab), build the operating system kernel, and install GRUB to make the target system bootable with legacy-style (MBR) booting. Because UEFI is much more involved and much more difficult, we will cover it in a separate article.

We should start in the host (MX Linux) with the target's filesystem mounted under $LFS, and use chroot to change to the target (chroot_to_lfs).

Step 1 - The Filesystem Table

Upon booting, the boot loader will load the kernel from the boot partition and pass parameters to it to indicate which partition holds the root filesystem. The kernel will mount the root partition but none of the others, as it has no way to determine their mount points. The mountfs script, as run by rc, will mount the remaining partitions. It expects to find a file called /etc/fstab that lists the filesystems and their respective mount points. This has been a Unix standard for so long that many other programs expect to find the same file.

First create the beginning of /etc/fstab, used for all partition schemes. This part contains the kernel's virtual filesystems.

cat > /etc/fstab << "EOF"
# Begin /etc/fstab

# file system  mount-point  type     options             dump  fsck
#                                                              order

proc           /proc     proc     nosuid,noexec,nodev 0     0
sysfs          /sys      sysfs    nosuid,noexec,nodev 0     0
devpts         /dev/pts  devpts   gid=5,mode=620      0     0
tmpfs          /run      tmpfs    defaults            0     0
devtmpfs       /dev      devtmpfs mode=0755,nosuid    0     0
EOF

The columns in the file are the filesystem device (file system), the mount point, the filesystem type, the mount options, the dump flag (for an older filesystem backup utility), and the pass number in which to run the filesystem checks. The last field indicates the order in which to check the filesystems, as the root must be clean and mounted before the others, and some filesystems need to be checked before yet others. Since XFS cleans on mount this isn't much of a problem, but if a major crash occurs we might need xfs_repair, so the check order still might be used. However, in the twenty years I have used XFS filesystems I believe I have used xfs_repair fewer than ten times, and only then on older, much less reliable, hard drives.
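
Should you ever need it, xfs_repair must be run on an unmounted filesystem; a minimal sketch, with <xxx> standing in for your XFS partition device:

umount /dev/<xxx>
xfs_repair -n /dev/<xxx>   # -n checks only, reporting problems without changing anything
xfs_repair /dev/<xxx>      # run again without -n to actually repair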

We have the mount.sh script we created back in the fourth chapter; it can serve as the guide for the fstab file. Let's review it. Since this file is outside the target, run the following in the host (MX Linux)

$ cat ~/lfs/mount.sh
mkdir -pv $LFS
mount -v /dev/<sdX> $LFS
mkdir -v $LFS/boot
mount -v /dev/<sdY> $LFS/boot

This shows the mount points (the mkdir commands created them) and the partition mounted at each. The above is the simple root/boot scheme from method 1.

Method 1 - Root/Boot

For this scheme you have only three devices: the root device, the boot device, and the swap (assuming you created one). The entries for fstab look like this

/dev/<xxx>    /          xfs      noatime              1     1
/dev/<yyy>    /boot      vfat     nosuid,noexec,noauto 1     2
/dev/<zzz>    swap       swap     pri=1                0     0

Replace <xxx> with the root partition device, <yyy> with the boot partition device, and <zzz> with the swap partition device. Either copy the following and modify it on the command line, or paste it into an editor, edit it, and copy it back

cat >> /etc/fstab << "EOF"
/dev/<xxx>    /          xfs      noatime              1     1
/dev/<yyy>    /boot      vfat     nosuid,noexec,noauto 1     2
/dev/<zzz>    swap       swap     pri=1                0     0
EOF

The pri=1 option sets the swap priority. Higher numbers mean higher priority, so if you have more than one swap device, such as one on an NVMe drive (where swapping is actually decent) and another on a slower SSD or much slower hard drive, give the faster device the higher number (for example pri=2) so the kernel prefers it.
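
After booting the finished system you can confirm the swap priorities the kernel is actually using:

swapon -a          # activate every swap device listed in /etc/fstab
cat /proc/swaps    # lists each device with its size, usage, and priority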

Method 2 - Full Partitions

With method 2 we have seven partitions (the swap and the noauto alternate partition do not appear in the mount script)

$ cat ~/lfs/mount.sh
mkdir -pv $LFS
mount -v /dev/<sdV> $LFS
mkdir -v $LFS/boot
mount -v /dev/<sdW> $LFS/boot
mkdir -pv $LFS/usr/local
mount -v /dev/<sdX> $LFS/usr/local
mkdir -v $LFS/var
mount -v /dev/<sdY> $LFS/var
mkdir -v $LFS/home
mount -v /dev/<sdZ> $LFS/home

The /etc/fstab entries would be

/dev/<ttt>    /          xfs      noatime              1     1
/dev/<uuu>    /boot      vfat     nosuid,noexec,noauto 1     2
/dev/<vvv>    /usr/local xfs      noatime              1     2
/dev/<www>    /var       xfs      noatime              1     2
/dev/<xxx>    /home      xfs      noatime              1     2
/dev/<yyy>    /mnt/alt   xfs      noauto,noatime       0     2
/dev/<zzz>    swap       swap     pri=1                0     0

Make the mount point for the alternate partition

mkdir -v /mnt/alt

The command to create the entries is

cat >> /etc/fstab << "EOF"
/dev/<ttt>    /          xfs      noatime              1     1
/dev/<uuu>    /boot      vfat     nosuid,noexec,noauto 1     2
/dev/<vvv>    /usr/local xfs      noatime              1     2
/dev/<www>    /var       xfs      noatime              1     2
/dev/<xxx>    /home      xfs      noatime              1     2
/dev/<yyy>    /mnt/alt   xfs      noauto,noatime       0     2
/dev/<zzz>    swap       swap     pri=1                0     0
EOF

where <ttt> is the root partition device, <uuu> is the boot partition device, <vvv> is the primary /usr/local partition device, <www> is the /var partition device, <xxx> is the /home partition device, <yyy> is the alternate /usr/local partition device, and <zzz> is the swap partition device. Neither /boot nor the alternate /usr/local is mounted at boot (noauto).

If you split the log and the data onto separate partitions for any XFS filesystem (Super Speed XFS from chapter 4), as discussed for /home, then the /home entry is as follows

/dev/<xxx> /home xfs noatime,logdev=/dev/<yyy>   0     2

For this option the command is the same as above, except the /home entry uses the line above, where <xxx> is the data partition device and <yyy> is the log partition device.
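
You can test the entry by hand before relying on fstab; the same options go straight to mount (placeholder devices as above):

mount -o noatime,logdev=/dev/<yyy> /dev/<xxx> /home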

If you did your own partitioning scheme, then you will need to create the /etc/fstab appropriately.

Regarding the noatime mount option listed for all the data partitions: without it, every time a file is accessed (opened) the access time must be updated in the inode (index node). This is not the same as the modification time, which is updated only when the file changes. Updating the inode with the access time for files that are accessed constantly can be a significant performance hit. The noatime option turns off access-time updates for all files on that partition. For most systems, knowing when a file was last accessed isn't important.
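
You can see both timestamps on any file with stat, which makes the distinction concrete:

stat -c '%x  (atime - last access)' /etc/fstab
stat -c '%y  (mtime - last modification)' /etc/fstab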

The nosuid option turns off the "set user ID" and "set group ID" bits for that filesystem. Since a file owned by root with the suid bit set runs as the root user while it executes, it is a security risk. You might want to disable suid on the /home partition if it is a multi-user system or you are not a developer; there is little reason to allow suid programs on /home.

The noexec mount option simply turns off the ability to run binary files stored on that partition. It is given for /boot as an additional security protection: nothing should ever need to run from the boot partition.
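
A quick way to see noexec in action is with a throwaway tmpfs mount:

mkdir -p /tmp/noexec-demo
mount -t tmpfs -o noexec tmpfs /tmp/noexec-demo
cp /bin/true /tmp/noexec-demo/
/tmp/noexec-demo/true          # fails with "Permission denied"
umount /tmp/noexec-demo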

Step 2 - Download the Kernel

The next step is to compile and install the Linux kernel. This is a fairly easy process, at least in the beginning; if you want an optimized kernel, things get more involved. Since we are basing the base system on the LFS method, we won't worry much about kernel performance initially. Later we will replace the default kernel entirely, so the version you use is fine provided it is Linux 5.0 or later. At the time of this writing 5.2.9 is considered the stable kernel. That is a good place to start, but if you have already downloaded 5.0 or above you can just use it. If you want the 5.2.9 kernel, go to a terminal in the host and do the following

cd $LFS/sources
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.2.9.tar.xz

If you set up resolv.conf and have the network working inside the target, you can instead just run

cd /sources
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.2.9.tar.xz --no-check-certificate

from the target.
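
kernel.org also publishes a sha256sums.asc file in the same directory; a quick way to verify the download (assuming you are in the directory holding the tarball):

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/sha256sums.asc
grep linux-5.2.9.tar.xz sha256sums.asc | sha256sum -c -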

Step 3 - Create the Source Tree

Back in the target (chroot_to_lfs) create the source tree

cd /sources
tar -xf linux-5.2.9.tar.xz
cd linux-5.2.9

Before we begin the build let's talk about what we are building.

The kernel of the operating system is the program responsible for querying, configuring, and initializing the hardware of the system, and for making that hardware available to programs through services called "system calls." The kernel may provide many other services as well, but making the hardware usable by application programs is the core role of an operating system. The Linux kernel consists of two parts.
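
If the host happens to have strace installed (our target does not, yet), you can watch this boundary between programs and the kernel directly:

strace -c ls /tmp    # -c prints a summary table of every system call ls made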

The first part is the kernel image file, which contains an image of the kernel that the boot loader loads into memory. It isn't a "program" exactly: it is the binary image of the code, requiring no linking, process startup, and so on. The boot loader simply loads it into memory and then passes control to it by jumping to the start address (of which the x86 architecture has quite a few to choose from).

The second part of the kernel is the modules. These are pieces of code kept outside the kernel image that the kernel can load at run time. Each module provides some desired bit of functionality. Since hardware drivers are only necessary for the hardware actually present, modules let the kernel load only the code it needs, rather than keeping every driver in memory whether needed or not.

Modules are installed on the root drive under /lib/modules, in a directory matching the kernel name (which is part of the compile options). The files end in .ko, for kernel object.

Many options in the kernel can be compiled directly into the kernel image, or they can be compiled as modules that the kernel loads on demand. The drivers for various pieces of hardware are usually compiled as modules because they are needed only where the hardware exists, but many other options can be modules as well, such as the various filesystems. Do you think you will ever use an Amiga filesystem? Probably not, so you can simply choose not to support it, making the kernel smaller and faster (with modern CPU caches, smaller is faster, as we will discuss later). The kernels that come with binary distros have to support a lot of hardware that may not even be present, because they are designed to run on a large number of different systems. But this is your distro, and it is designed to run on only one system: yours. Later in the project we will ferret out unneeded modules and slim down the kernel, but since we'll be using a different kernel by then, doing that work now would be superfluous.
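
On the host you can get a feel for how many modules a binary distro carries and what any one of them provides (ahci here is just an example module name):

# count the modules shipped with the host's running kernel
find /lib/modules/$(uname -r) -name '*.ko*' | wc -l
# show a module's description and the parameters it accepts
modinfo -d ahci
modinfo -p ahci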

Although a lot of the kernel can be built as modules, we don't want everything as a module. A module can be passed parameters when it is loaded, which is a good reason to use modules and is sometimes required for quirky hardware; some drivers compiled into the kernel image allow changes through the /sys virtual filesystem, and others don't. Our only hard requirement is that the kernel be able to access the root partition without loading any modules; otherwise you will need an initial ramdisk (initrd). The more complicated the boot process, the harder it is to maintain and the slower it is, and one of our rules is "keep it simple." I prefer to use an initrd only for CPU microcode (as discussed later).

Since we used the XFS filesystem for the root partition (or maybe you didn't), we need to ensure XFS support is compiled into the kernel. If you used another filesystem for the root, make sure that one is compiled in instead. The other functionality we need is the bus drivers between the root partition and the kernel. If you are booting off a USB-connected drive of any type (USB stick, SD card) those drivers will need to be included as well. Since the host already has its drivers loaded, let's take a look at what it is using with the lspci command

# lspci -k
00:00.0 Host bridge: Intel Corporation Device 3e34 (rev 0b)
        Subsystem: ASUSTeK Computer Inc. Device 1481
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620 (Whiskey Lake)
        DeviceName: VGA
        Subsystem: ASUSTeK Computer Inc. UHD Graphics 620 (Whiskey Lake)
        Kernel driver in use: i915
        Kernel modules: i915
00:04.0 Signal processing controller: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem (rev 0b)
        Subsystem: ASUSTeK Computer Inc. Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem
        Kernel driver in use: proc_thermal
        Kernel modules: processor_thermal_device
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model
        Subsystem: ASUSTeK Computer Inc. Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Device 9df9 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: intel_pch_thermal
        Kernel modules: intel_pch_thermal
00:13.0 Serial controller: Intel Corporation Device 9dfc (rev 30)
        Subsystem: Intel Corporation Device 7270
        Kernel driver in use: intel_ish_ipc
        Kernel modules: intel_ish_ipc
00:14.0 USB controller: Intel Corporation Device 9ded (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 201f
        Kernel driver in use: xhci_hcd
        Kernel modules: xhci_pci
00:14.2 RAM memory: Intel Corporation Device 9def (rev 30)
        Subsystem: Intel Corporation Device 7270
00:14.3 Network controller: Intel Corporation Device 9df0 (rev 30)
        DeviceName: WLAN
        Subsystem: Intel Corporation Device 0034
        Kernel driver in use: iwlwifi
        Kernel modules: iwlwifi
00:15.0 Serial bus controller [0c80]: Intel Corporation Device 9de8 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: intel-lpss
        Kernel modules: intel_lpss_pci
00:15.1 Serial bus controller [0c80]: Intel Corporation Device 9de9 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: intel-lpss
        Kernel modules: intel_lpss_pci
00:16.0 Communication controller: Intel Corporation Device 9de0 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: mei_me
        Kernel modules: mei_me
00:17.0 SATA controller: Intel Corporation Device 9dd3 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: ahci
        Kernel modules: ahci
00:19.0 Serial bus controller [0c80]: Intel Corporation Device 9dc5 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: intel-lpss
        Kernel modules: intel_lpss_pci
00:1c.0 PCI bridge: Intel Corporation Device 9db8 (rev f0)
        Kernel driver in use: pcieport
00:1c.4 PCI bridge: Intel Corporation Device 9dbc (rev f0)
        Kernel driver in use: pcieport
00:1f.0 ISA bridge: Intel Corporation Device 9d84 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
00:1f.3 Audio device: Intel Corporation Device 9dc8 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
00:1f.4 SMBus: Intel Corporation Device 9da3 (rev 30)
        Subsystem: ASUSTeK Computer Inc. Device 1481
        Kernel driver in use: i801_smbus
        Kernel modules: i2c_i801
00:1f.5 Serial bus controller [0c80]: Intel Corporation Device 9da4 (rev 30)
        Subsystem: Intel Corporation Device 7270
02:00.0 3D controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] (rev a1)
        DeviceName: Second VGA
        Subsystem: ASUSTeK Computer Inc. GP107M [GeForce GTX 1050 Mobile]
        Kernel modules: nvidia_drm, nvidia

(It should now be obvious I am using an Intel-based ASUS laptop with Whiskey Lake i915 graphics and an nVidia GTX 1050 installed in Optimus mode -- a challenging target for any distro manager.)

The -k option prints the driver the kernel is using for each device, as you can see with my hardware (an ASUS laptop). It doesn't hurt to print this out, grab a screenshot, or simply write the drivers down.

In my case, the root is a SATA drive attached through the PCI Express bus using AHCI. I need the PCI drivers, the AHCI driver, and XFS filesystem support just to mount the root partition without an initial ramdisk. Once the root partition is mounted, everything else can be a module, since the modules live on the root. With most modern systems the disks will be AHCI; very old systems and specialty systems will need more work on your part to determine what is being used.
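
You can trace the bus path of your own root disk through sysfs; a sketch assuming the root disk is sda:

readlink -f /sys/block/sda    # the full path shows every bus in the chain (PCI -> AHCI -> SATA)
lsmod | grep -E 'ahci|xfs'    # confirm what the host loaded for the controller and filesystem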

Step 4 - Configuring the Kernel

The kernel must be configured before it can be compiled. This simply means deciding what options you want the kernel to support: filesystems, I/O schedulers, timers, hardware device drivers, cryptographic functions, and much more. It might seem daunting, but building a kernel is fairly easy unless you really want a minimal one that fully supports your system (a task we'll tackle later).

The configuration is stored in a file called .config by default, though this can be overridden at compile time.
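
Once a .config exists (after the defconfig step below) you can check how any symbol is set with grep; for example, the two options this article cares about:

grep -E 'CONFIG_XFS_FS|CONFIG_SATA_AHCI' .config
# =y means built into the image, =m means module, "is not set" means disabled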

First let's run through the various common configuration options.

make config - Every question is asked on the terminal; can be used on the console. Be warned there are hundreds.
make menuconfig - Older text-based menus with color; the standard console choice
make nconfig - Newer text-based menus with color; enhanced, nice, and a great choice for console builds
make xconfig - Configure using the Qt-based X graphical system (non-console)
make gconfig - Configure using the GTK+ X graphical system (non-console)
make oldconfig - Use the existing configuration file and ask only about new symbols
make olddefconfig - Like oldconfig, but asks nothing and sets every new symbol to its default
make defconfig - Make the default configuration for the current architecture
make localmodconfig - Make a configuration based on the current .config and the loaded modules, disabling every symbol the loaded modules don't need
make tinyconfig - Make the smallest possible kernel

There are more options, but these are more than you will probably ever use. You should become familiar with one of the console options (config, menuconfig, or nconfig). You will also be using the oldconfig option to upgrade kernels. Once we get the graphical system up you might switch to one of the graphical methods, but they don't offer a huge advantage over nconfig.

Start with the architecture default configuration.

make defconfig

Then adjust it according to the host's loaded modules. This step will disable any module not currently loaded, which takes care of quite a few you don't need

make localmodconfig
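
One caution: localmodconfig drops support for anything not loaded at that moment, so plug in any USB devices you plan to use first. You can also snapshot the module list and pass it in explicitly with the LSMOD variable this target honors:

lsmod > /tmp/lsmod.txt                   # snapshot while all your hardware is active
make LSMOD=/tmp/lsmod.txt localmodconfig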

There are still a few changes we need, and editing the .config file directly isn't always practical because it shows only the top level of disabled option groups. Instead we will use nconfig.

make nconfig

which will bring up the main kernel configuration screen showing the top-level menu branches.

Use the arrow keys to move around, <ENTER> to select, press 'y' to add an option directly into the kernel, 'm' to add it as a module, and 'n' to remove it; alternatively, press <SPACE> to toggle through all the choices. If an item has braces instead of brackets ({ } versus [ ]) then it has been enabled by another choice and you can't change it directly (it is a dependency of another selection).

First we need to build the XFS code into the kernel. Select the following

File Systems --->
   <*> XFS filesystem support
   [*] XFS Quota support
   [*] XFS POSIX ACL support
   [*] XFS Realtime subvolume support

The asterisk (*) indicates the option is built directly into the kernel image rather than as a module. This is how kernel configuration requirements are usually presented: the listing shows all the choices from the main menu down to the options to change, and what the options should look like to meet the requirements. You will see this format frequently as you get deeper into Linux system building. The options above ensure the kernel can use the XFS filesystem at boot without first needing to mount the root and load a module, which is important since our root filesystem is XFS.

The same needs to be set for your boot disk bus, which is probably AHCI for most disks today (assuming they are SATA). If you are booting from an SD card it is probably attached through USB, and you'll need to add the USB core code and the specific driver for your USB device to the kernel. You can use the host operating system to query this: put your SD card in the computer, find out which device is added, and then look for it in the lsusb output.
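
The tree view of lsusb is the easiest way to see which kernel driver claims each USB device:

lsusb -t    # shows the USB topology with the driver bound to each device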

The default configuration already builds AHCI into the kernel image, so there is no more configuration we need to do there. But verify it with

Device Drivers --->
   <*> Serial ATA and Parallel ATA drivers (libata) --->
      <*> AHCI SATA support

We also want to make sure the kernel has the uevent helper disabled

Device Drivers  --->
   Generic Driver Options  --->
      [ ] Support for uevent helper

Step 5 - Make the Kernel Image

This step will compile the kernel image. Normally you would ensure a clean tree first with make mrproper, but be aware that mrproper also deletes the .config file, so it belongs before configuration, not after (our freshly extracted tree is already clean). Assuming MAKEFLAGS is set for a parallel build with the -j<number of cores> option, simply run

make

If not, supply it on the command line

make -j8

This will take some time the first time you compile. When complete you will have a file called vmlinux at the top of the kernel source directory; this is the new uncompressed kernel image.
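
The compressed boot image is written to arch/x86/boot/bzImage during the same build; comparing the two shows what the compression buys:

ls -lh vmlinux arch/x86/boot/bzImage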

The build process hides the individual compile commands, but you may need to see them if you have a problem. To see the actual build commands, add V=1 to the make command

make V=1 -j8

You will now see each command during the build. This also applies to the module builds below.

Step 6 - Compile the Modules

This is really just a check for differences, since the make command above already compiled the kernel image and all the modules, so it won't take long. If you later change a module-only option, such as a driver, this target rebuilds only the modules affected by the configuration change.

make -j8 modules

Step 7 - Install the Modules

The installation of the modules is handled by the kernel build process.

make modules_install

Now check the contents of /lib/modules. There should be a new directory named for the kernel version. In the future you need only run this target; it will trigger the module build itself. I separated the steps here purely for instructional purposes.
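
modules_install also runs depmod to generate the dependency maps the module loader needs; a quick look (the directory name matches the kernel version):

ls /lib/modules/5.2.9/
# kernel/       the installed .ko files, mirroring the source tree layout
# modules.dep   the dependency map depmod generated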

Step 8 - Install the Kernel

First, make sure the /boot partition is mounted. If not, mount it with

mount -v /boot

Then install the kernel. This copies the compressed kernel image and all necessary pieces to the /boot directory

make install

You will see a complaint about LILO, but ignore that. Now look in the /boot directory: there is a file called vmlinuz and another called System.map. Let's rename these to something more meaningful

mv -v /boot/vmlinuz /boot/vmlinuz-ttlp-5.2.9
mv -v /boot/System.map /boot/System.map-ttlp-5.2.9

and copy the configuration file to keep it in case we want to rebuild this kernel in the future

cp -v /sources/linux-5.2.9/.config /boot/config-ttlp-5.2.9

The System.map file is a text file containing the addresses of all the kernel symbols, which makes the kernel easier to debug. It isn't strictly necessary, but it will help if you ever need support with the kernel, as it makes debug information much more meaningful.
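
As a quick illustration, every kernel's map contains the start_kernel symbol; you can look up its address:

grep -w start_kernel /boot/System.map-ttlp-5.2.9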

Step 9 - Making the System Bootable

The last step is to use GRUB 2 to make the system bootable. We have two methods of doing so. The older Master Boot Record (MBR) method puts the stage 1 bootloader in a special area at sector 0 of the disk, preceding the first partition. The second option is the UEFI boot process. UEFI is preferred since it makes multi-boot systems much easier and supports different partition tables, but it is complicated and muddied by firmware vendors' differing interpretations of how it should actually work.

If you already have a bootloader for another operating system, it is easier to modify that one: add an entry to boot the TTLP kernel from the /boot partition (for a system already using GRUB 2, just add the proper menu entry to boot/grub/grub.cfg). If you choose to do that, you are on your own (and capable of it, or you wouldn't consider it).

MBR

The MBR method is much simpler. First install the GRUB stage 1 loader in the MBR of the boot disk. This is generally /dev/sda, but it might be another disk depending on a setting in your BIOS.

WARNING: This will overwrite your current boot loader. If there is another operating system on the computer, you will not be able to boot it again (at least without changes we won't cover here).

grub-install --target=i386-pc /dev/sda

Now create the configuration for GRUB

cat > /boot/grub/grub.cfg << "EOF"
# Begin /boot/grub/grub.cfg for The Toucan Linux Project
set default=0
set timeout=3

if loadfont /grub/fonts/unicode.pf2; then
   set gfxmode=640x480
   insmod all_video
   terminal_output gfxterm
fi

set menu_color_normal=white/blue
set menu_color_highlight=blue/yellow

insmod part_gpt
insmod fat

menuentry "Toucan Linux, Linux 5.2.9" {
   set root=(hd0,1)
   insmod xfs
   linux /vmlinuz-ttlp-5.2.9 root=/dev/sda2 ro rootfstype=xfs
}
EOF

This assumes the first hard disk contains the boot partition and that the boot partition is the first partition on it. If this is not the case, you need to edit the file and change the

set root=(hd0,1)

to the proper disk (hd0 is the first disk, hd1 the second, and so on). If the boot partition is not the first partition, you also need to change the number after the comma. For example, if the boot partition is partition 2 on the first disk it needs to be

set root=(hd0,2)

If the partition is partition 3 on the second disk it needs to be

set root=(hd1,3)

I chose to put the set root command inside the menu entry stanza (local to the menuentry) rather than outside. This way you can have different boot partitions for different distros, which keeps their kernels separated.
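
From the host, lsblk makes it easy to work out the (hdX,Y) numbering; a sketch assuming the target disk is sda:

lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT /dev/sda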

This configuration sets the default kernel to the first one listed, gives you 3 seconds at the menu to strike a key and select a different one before it boots, and uses the framebuffer if possible (gfxterm). It sets the resolution to 640x480 to ensure the menu is readable even on high-resolution displays. It then sets the menu colors (which you are free to change) and loads the GPT partition and FAT filesystem modules (as a precaution; they are probably already loaded). Finally, it creates a menu entry for our kernel which sets the GRUB root (not the operating system root) to the boot partition, loads the XFS module so GRUB can read XFS filesystems, loads the kernel, and lets GRUB complete the boot.

UEFI

This will be a bit more difficult, but it is the better way to go on a UEFI system. First, you will need to turn off "Secure Boot" in your BIOS. Secure Boot uses cryptographic keys to verify the kernel image, and while supporting it is possible, it is beyond the scope of our install. It is much easier to turn off Secure Boot, which was designed more for Windows than for Linux. You must disable Secure Boot or do a large amount of work on your own; I would use this article from LinuxJournal as a guide to creating the keys if you need Secure Boot.

UEFI has many different standards, and computer manufacturers have many different ways of supporting it. While it should be very easy, it often is not: some systems make it easy to add UEFI boot loaders from the BIOS, others do not; some offer a simple "hold down F1 for a boot menu," others do not. The proper procedure for getting UEFI to work is a multi-step process that involves adding more code to the base system. Because it is lengthy, we will do that in the next installment.

Copyright (C) 2019 by Michael R Stute
