Saturday, September 30, 2017

How do I limit software from executing kernel attacks? Seccomp

Hi all,
Today I'll be talking about a security facility called Seccomp.

As you may know, Linux has roughly 400 system calls.
You can take a glimpse at the syscall table in the kernel tree, at arch/x86/entry/syscalls/syscall_64.tbl; as the path suggests, the table's contents vary between architectures.

The seccomp facility restricts which system calls a process may invoke; in other words, it is a sandbox-like security mechanism embedded in the kernel.

The filters are kept in a linked list, shown below:

struct seccomp_filter {
 refcount_t usage;            /* reference count */
 bool log;                    /* should filter actions be logged? */
 struct seccomp_filter *prev; /* previously installed filter in the list */
 struct bpf_prog *prog;       /* the attached BPF program */
};

The list is anchored in a struct seccomp, which resides in the task_struct (sched.h) and points to the installed filters:

struct seccomp {
 int mode;                      /* SECCOMP_MODE_DISABLED / STRICT / FILTER */
 struct seccomp_filter *filter; /* head of the filter list */
};

If the program executes an unexpected system call, the kernel terminates the process (a kill signal is sent), since it might be malicious code.

You are probably saying to yourself: that's COOL,
so how do I set up the seccomp filters?

1) Compile the kernel with CONFIG_SECCOMP_FILTER set.

2) Install the filter from your program with prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER,..).

3) The filters you apply to system calls are written as Berkeley Packet Filter (BPF) programs, the same well-known mechanism that has been used for years for packet filtering in tools such as tcpdump.

So who is using seccomp?
1) Chrome browser (in chrome's address bar enter: chrome://sandbox/)
2) OpenSSH
3) systemd
4) Firefox OS
5) Docker (Seccomp security profiles for Docker)
6) LXC

If you would like to know whether a program is running in seccomp mode,
you can simply read /proc/<pid>/status.
There is a field called Seccomp:
0 means SECCOMP_MODE_DISABLED;
1 means SECCOMP_MODE_STRICT;
2 means SECCOMP_MODE_FILTER.
You can also check the field programmatically, as sketched below.
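
A minimal sketch (my own example, not from the original post) that prints this field for the current process by scanning /proc/self/status:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (!f) {
        perror("fopen");
        return 1;
    }
    /* the "Seccomp:" line holds 0, 1 or 2 as described above */
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "Seccomp:", 8) == 0)
            printf("%s", line);
    }
    fclose(f);
    return 0;
}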

If the value is 2, a filter has been created and installed into the kernel, and from now on every system call the process makes is tested against the list of filters.

On each system call the filter returns one of these five values, and the kernel acts accordingly:

#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
#define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */
#define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */
#define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer, or disallow if no tracer is attached */
#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */

Taken from: http://elixir.free-electrons.com/linux/latest/source/include/uapi/linux/seccomp.h

So let's assume you have read a great article about some new functionality,
and to test that functionality you are given access to download a shared object.
On the other hand, the website might infect you with malware, so perhaps you should use the seccomp mechanism, since the shared object might execute malicious code.

The suggested solution is therefore to use seccomp, which filters the unwanted system calls and prevents them from being executed.
I have illustrated this in a flowchart:




The code I wrote looks like this:

#include <stdlib.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <sys/socket.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <linux/audit.h>

#define ArchField offsetof(struct seccomp_data, arch)

#define Allow(syscall) \
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, SYS_##syscall, 0, 1), \
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW)


void complex_computation(char *);  /* provided by the downloaded shared object */

struct sock_filter filter[] = {
    /* validate arch */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, ArchField),
    BPF_JUMP( BPF_JMP+BPF_JEQ+BPF_K, AUDIT_ARCH_X86_64, 1, 0),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),

    /* load syscall */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),

    /* list of allowed syscalls */
    Allow(exit_group),  /* exits a process */
    Allow(brk),     /* for malloc(), inside libc */
    Allow(mmap),        /* also for malloc() */
    Allow(munmap),      /* for free(), inside libc */
    Allow(write),       /* called by printf */
    Allow(fstat),       /* called by printf */

    /* and if we don't match above, die */
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
};
struct sock_fprog filterprog = {
    .len = sizeof(filter)/sizeof(filter[0]),
    .filter = filter
};

int main(int argc, char **argv) {
    char buf[1024];

    /* set up the restricted environment */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
        perror("Could not start seccomp:");
        exit(1);
    }
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &filterprog) == -1) {
        perror("Could not start seccomp:");
        exit(1);
    }
 
    complex_computation(buf); /* functionality taken from the shared object */

    printf("Task was completed (no malware was reported)!\n");
} 



For example, below is the actual strace dump of the binary, which shows how the unlink system call was blocked; the malicious code's intention was to damage my file system:

unlink("/home/gil/my_important_file.txt" <unfinished ...>
+++ killed by SIGSYS +++
Bad system call (core dumped)
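
If killing the task feels too harsh, a filter can instead make the offending call fail with an errno. Here is a minimal sketch of that variant (my own example, x86-64 assumed; for brevity it skips the architecture check that the program above performs):

#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void) {
    struct sock_filter filter[] = {
        /* load the syscall number */
        BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* if it is unlink, fail it with EPERM; otherwise allow */
        BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, SYS_unlink, 0, 1),
        BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ERRNO | EPERM),
        BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter)/sizeof(filter[0]),
        .filter = filter
    };

    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("prctl");
        return 1;
    }

    /* invoke unlink directly through syscall() so the libc wrapper cannot
       route it through a different syscall such as unlinkat() */
    if (syscall(SYS_unlink, "/tmp/file_that_does_not_matter") == -1)
        printf("unlink was blocked, errno=%d (%s)\n", errno, strerror(errno));

    return 0;
}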

Sunday, September 17, 2017

Strace in depth (profiling system calls)

Hi,
Software developers sometimes find it necessary to delve into binary files and get a better grasp of what exactly happens under the hood while running an executable. Does it affect overall system performance?
Which system calls are being invoked?


I spoke briefly about strace a few years ago.
With strace you can obtain a lot of information about kernel calls while your program is executing, allowing you to follow the flow of the process "live", and you can save the strace output in order to analyze it comfortably afterwards, "offline".

We can easily use strace for this task, but sometimes the huge amount of output is too cluttered, so here are a few tips:

tip #1: redirect the strace output into a file and use the verbose option

               strace -v -o dump_file.txt   bin_file

              The verbose flag ensures you get the full, unabbreviated arguments for each
              system call invocation.

              This way you can get some answers for your questions:

               1) Which system calls are employed by the application?
               2) Which files does the application touch?
               3) What arguments are being passed to each system call?
               4) Which system calls are failing, and why? (errno)

tip #2: strace by process-id and get time spent on each system call

               strace -T -o dump_file.txt   bin_file
               or
               strace -T -p <pid> -o dump_file.txt

              The -T flag shows the time spent in each system call: it records the time
              difference between the beginning and the end of each system call.

tip #3: Apply filters on the system calls

               there are about 400 system calls, and sometimes we would like to avoid
               cluttering the output with irrelevant ones. For example, suppose we would like
               to investigate only two system calls, open() and close(); then we use the -e flag:

               strace -e trace=open,close -o dump_file.txt  bin_file


               we can also filter by syscall group; there are 7 categories:

               file          -  Trace all system calls which take a file name as an argument

               process  -  Trace all system calls which involve process management,
                                   such as the fork, wait, and exec steps of a process.

               network - Trace all the network related system calls.

               signal     - Trace all signal related system calls.

               ipc           - Trace all IPC related system calls.

               memory  -  Trace all memory mapping related system calls.

               desc         - Trace all file descriptor related system calls. 



               for example: Getting all system calls regarding network operations:

               strace -e trace=network -o dump_file.txt  bin_file
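
               If you want a tiny binary to experiment on, a toy program like the
               following (my own example, not from the post) makes exactly the kind of
               open/read/close calls that the -e trace=open,close filter above will catch;
               note that on a modern glibc the open() wrapper may appear as openat in the trace.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* open(), read() and close() are the calls we asked strace to show */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("hostname: %s", buf);
    }
    close(fd);
    return 0;
}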


tip #4: Getting a much clearer picture of how the system calls are
              distributed.


              strace -c -w -S time bin_file

              The -c flag counts time, calls, and errors for each system call.

              The -w flag summarises the time difference between the beginning
              and end of each system call (wall-clock time).

              The -S flag sorts the histogram printed by the -c option
              by the specified criterion. Legal values are time, calls, name,
              and nothing.

              For example, let's check the system-call distribution when
                                    invoking the dd command.

              I have generated a simple barplot (with R) to present it more
              visually:


To summarize: with strace you can easily analyze and investigate malicious code;
it is very useful for day-to-day work.
I hope you enjoyed the post, and let me know if you would like me to cover other topics. Till next time, bye!

Monday, December 7, 2015

Let's perform some magic with Cgroups!

cgroups is a mechanism for monitoring and managing the computer's resources, such as:
  • CPU runtime
  • Memory usage
  • Read/write speed for block device
  • Network bandwidth
As we know, Linux is great at sharing resources among running applications/processes, but let's say this time I don't want to distribute my resources equally among processes; I want to guarantee more resources to a specific process.
This can be done via control groups, aka cgroups.
Let's say we have one process which is far more important than the others: I would declare a profile consisting of resource limits and then assign this profile to the process.
A similar example comes up with containers/VMs, where we would like to prioritize resources between the containers;
this way we limit the impact of a VM that hogs the CPUs.

I suggest you read the kernel's documentation on cgroups, which
explains everything very well; here is the link:
https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt

In case the cgroups tools are not installed on your system, you can install them right away via (I'm using Ubuntu 14.04 for demonstration purposes):

sudo apt-get install cgroup-bin

After a reboot we can see a folder named cgroup:
cgroups is now mounted at /sys/fs/cgroup.
List the contents of that folder and you should see the following subdirectories:


Those subdirectories represent the control-group subsystems (controllers) which you can manage.

In this post I'll give three demonstrations, each one managing a different kind of resource. So let the fun begin!

Example #1 - CPU cores usage

Let's say I would like to run a specific process on a specific core; I can easily do that on the fly. Create a control group under the cpuset folder, and then echo the number of the CPU you want to assign to the process, for example:

echo 0 > ./cpuset.cpus

(in cgroup v1 you also need a non-empty cpuset.mems, e.g. echo 0 > ./cpuset.mems, before tasks can be attached to the group).

Run your process, and then bind its pid to the group via:

echo <PID> > ./tasks

Below you can see a screenshot of the graph I took while monitoring the 4 cores; there are 4 intervals, which I'll explain:

Interval #1 (50-60 sec):
  • Demonstrates the 4 cores which run on normal load.
Interval #2 (30-50 sec):
  • I have invoked my complexCalculation process, you can easily notice a ramp on CPU3.
Interval #3 (10-30 sec):
  • I have applied the cgroup rule, so now the process runs only on CPU1; we can see the decline on CPU3 and a ramp on CPU1.
Interval #4 (0-10 sec):
  • I have stopped the process complexCalculation so as we would expect there is a graceful degradation on CPU1.
For live updates on core usage, I used the top command:

top -p<PID>

Press 1 to toggle the separate-CPU-states view.

So what actually happened under the hood?
The hard affinity is stored as a bitmask in the task's task_struct, in the cpus_allowed field (see sched.h). The bitmask contains one bit per possible processor on the system (in my case, 4 CPUs). By default all bits are set and, therefore, a process is potentially runnable on any processor.
After I echoed the pid into the tasks file, the function sched_setaffinity() was invoked;
we can easily see this via ftrace or by setting a breakpoint.
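
To see that bitmask from user space, a minimal sketch (my own example, not from the post) using sched_getaffinity() prints the CPUs the calling process is allowed to run on:

#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);

    /* pid 0 means the calling thread */
    if (sched_getaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &set))
            printf("allowed to run on CPU%d\n", cpu);
    return 0;
}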

Example #2 - Limiting memory usage

We can easily write a C program which, on each loop iteration, grabs a chunk of memory of about 5 MB,
so after about 15 iterations it has consumed 75 MB of RAM. I'll be calling this small app "processWastingMemory" (a sketch of it appears after the list below).
To contain this scenario (wasting memory) we can add a new rule for memory consumption; the rule resides in the memory controller:

1) memory.limit_in_bytes (physical memory)
2) memory.memsw.limit_in_bytes (swap usage)
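
Here is a minimal sketch of what "processWastingMemory" might look like (my own reconstruction, not the author's code): it allocates about 5 MB per iteration and touches the pages so the memory is actually charged to the cgroup.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (5 * 1024 * 1024)   /* ~5 MB per iteration */

int main(void)
{
    for (int i = 1; i <= 15; i++) {
        char *p = malloc(CHUNK);
        if (!p) {
            perror("malloc");
            return 1;
        }
        memset(p, 0xAA, CHUNK);   /* touch every page so it is really allocated */
        printf("iteration %d: ~%d MB in use\n", i, i * 5);
        sleep(1);                 /* slow it down so the kill is easy to observe */
    }
    return 0;
}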

Let's create a control group named "myDemo".

20 MB = 20,971,520 bytes

echo 20971520 > /sys/fs/cgroup/memory/myDemo/memory.limit_in_bytes
echo 20971520 > /sys/fs/cgroup/memory/myDemo/memory.memsw.limit_in_bytes

Now let's run the process/task in the given control group:

cgexec -g memory:myDemo ./processWastingMemory

Here I'm specifying the control group in which the task will run; the controller is "memory". After executing the command we can easily see that the program gets killed as soon as it reaches the 20 MB memory limit.

We can check dmesg, which shows the following message:

"Memory cgroup out of memory: kill process"

Example #3 - Read/write speed for block device

Will be given next week with interesting graphs... so stay tuned!

Meanwhile enjoy exploring new intriguing stuff in the Linux world! :)

Sunday, September 28, 2014

Likely & Unlikely macros in the kernel

Hey Guys!
Today I'll explain two widely used kernel macros that give gcc better branch-prediction hints.
But first of all let's start with the basics and fundamentals; here is a short refresher. As we may know, each instruction in C is translated to assembly language, as I mentioned a few years ago in the post Get familiar with gcc compiler.

Each C statement is translated into assembly instructions,
which are pushed into the pipeline. I'll elaborate on what the pipeline actually is:

The pipeline is the processor's mechanism for overlapping the execution of instructions.
Moreover, the more stages there are (in the picture I drew, n = 4), the more instruction-level parallelism we get.



On each CPU cycle the current instruction shifts right into the next stage.
Through this parallelism we speed up execution by fetching the next instruction while the previous instructions are being decoded and executed.
If the pipeline is full, one instruction completes on every cycle tick.

Of course the number of stages depends on the architecture; for example, the ARM Cortex-A8 on my BeagleBoard-xM, which implements the ARMv7 (32-bit) instruction set architecture, has a 13-stage integer pipeline.

So you are probably asking yourself: why not add even more stages in today's CPU cores? Well, although the level of parallelism increases, there are a few drawbacks, which I'll discuss now:
1) Instruction latency
The actual time (latency) to execute a single instruction gets larger as we add more stages, because it takes more cycles to fill the pipeline.
2) True data dependency
Suppose two consecutive instructions are fetched into the pipeline, such as:
First instruction: INC_REGISTER R1
Second instruction: INC_REGISTER R1
Let's say register R1 holds the value 0x4447. After the first instruction completes, the value is 0x4448; but the second instruction may already have read R1 while the first was still in the pipeline, so it also starts from 0x4447 and produces 0x4448 instead of 0x4449. The conclusion is that we (or the compiler) should avoid placing an instruction immediately after the instruction whose result it depends on.
3) Procedural dependency (branch instructions)
When we reach a branch, the instructions along the predicted path are already preloaded into the pipeline and we keep executing them; but eventually (after a few loop iterations, say) the branch condition goes the other way, so we have to get rid of all the instructions which were
loaded into the pipe and fetch the new sequential instructions that appear along our new flow.
To get rid of the irrelevant instructions,
the core flushes the pipeline. This kind of unpredictable change of the program counter can easily reduce the processor's performance.


Now that I've clarified the third problematic scenario, let's get to the point.
In the kernel there are two well-known macros which I use quite often:
likely and unlikely. These macros tell gcc which way a branch usually goes, and the compiler can optimize the generated code based on that information.
If you are curious, you're more than welcome to check the outcome of using those macros and see how the assembly code is arranged to optimize for the processor pipeline: write down a code snippet and compile it with the optimization flag on: gcc -O2.
For example, I wrote the following short snippet in my vim editor:
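
A representative sketch of that kind of snippet (my own reconstruction, not the exact code from the screenshot; in user space the kernel macros boil down to __builtin_expect):

#include <stdio.h>
#include <stdlib.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int main(int argc, char **argv)
{
    long sum = 0;

    for (long i = 0; i < 100000000; i++) {
        /* hint to gcc that this branch is rarely taken;
           swap unlikely() for likely() and diff the objdump output */
        if (unlikely(argc > 1))
            sum += strtol(argv[1], NULL, 10);
        else
            sum += i;
    }
    printf("sum = %ld\n", sum);
    return 0;
}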



Afterwards you can take a look at the disassembled binary via:
 objdump -S <binary file>

Then modify the code to use the likely macro instead of the unlikely one and repeat.
Here are the neat results I got; the comparison between the two is presented in a meld window (a GNU diff tool):


Likely macro Vs Unlikely macro

We can easily see that the compiler generated the x86 assembly with the code arranged according to the likelihood of the branch
(I have marked the differing assembly lines with colourful rectangles).
So above we have a simple, nice demonstration of avoiding the penalty of flushing the processor pipeline.

I hope you enjoyed today's session. Next time I'll lift the hood on more kernel stuff. Enjoy!

Tuesday, July 22, 2014

The truth behind Linux signals

Today I'll be talking about signals in Linux.
Along with this discussion I'll give a couple of code examples so you get a better grasp of the topic. So let's start:

Signals are another way of communicating, either between the kernel and a process or among processes (an IPC mechanism). They are usually called software interrupts since they occur asynchronously.

There are 32 classic signal types (see NSIG in signal.h); each signal is triggered when a specific scenario occurs, and for each signal there is a default registered signal handler which is invoked in that scenario.

If we would like to register and use our own signal handler instead of the OS's default handler, we can make good use of the signal() system call.

The list of signals can easily be viewed in the header file signal.h.
Moreover, we can send a signal to a specific process via the kill system call, specifying the pid and the signal number.

I'm quite sure that you programmers out there have received a few segmentation faults along your coding journey, such as the following message:

"line 45: 27702 Segmentation fault      (core dumped) "

As you may know, this popular message is generated in response to a segmentation fault, which occurs when dereferencing invalid memory.
There can be many reasons for it, such as:
a process tried to access a part of kernel memory while running in user mode, or a process tried to modify a variable which
is located in read-only memory (the text segment, or a mapping marked read-only) - a simple example will be given later.
In such cases the operating system delivers the SIGSEGV signal.

SIGSEGV is just one example of the 32 signals that exist.

SIGINT is quite useful too in Linux, for interrupting the foreground process (Ctrl+C).

Another well-known one is SIGFPE, which is raised when an illegal arithmetic operation is performed, such as dividing by zero.

Here is a quite straightforward program I wrote
which demonstrates signals in action:

#include <stdio.h>
#include <signal.h> /* for the SIGINT/SIGSEGV/SIGFPE definitions and the signal() function */
#include <stdlib.h>
#include <string.h> /* for strsignal() */
#include <unistd.h> /* for getpid() */

void my_global_sighandler(int signum)
{
 /* note: printf()/exit() are not async-signal-safe, but they are fine for a short demo */
 switch (signum)
 {
 case SIGINT:
   printf("\nAttention: an interrupt signal was invoked!\n");
   break;
 case SIGSEGV:
  printf("\nAttention: a segmentation fault occurred\n");
  break;
 case SIGFPE:
  printf("\nAttention: an arithmetic error occurred\n");
  break;
 }

 printf("\nProcess (PID=%d) has received a signal of type \"%s\" (message from my signal handler)\n", getpid(), strsignal(signum));
 exit(EXIT_FAILURE);
}

int main(int argc, const char *argv[])
{
 char *s = "Introduction to signals"; /* string literal: stored in a read-only segment */
 int a = 5;
 double res;
 int user_pick;

 /* Registering a few signals to the signal handler: my_global_sighandler */
 signal(SIGINT, my_global_sighandler);
 signal(SIGFPE, my_global_sighandler);
 signal(SIGSEGV, my_global_sighandler);

 printf("\n****Coding for pleasure - Today's talk is Linux's signals****\n");
 printf("There are 32 signals, but here I'll demonstrate three common ones, so let's start:\n");
 printf("\nWhich signal would you like to generate?\n");
 printf("1) SIGINT - Interrupt \n2) SIGFPE - Floating point exception \n3) SIGSEGV - Segmentation fault\n");
 scanf("%d", &user_pick);
 printf("\nGenerating signal ...\n");

 switch (user_pick)
 {
 case 1:
  printf("\nPress Ctrl+c to invoke the SIGINT signal\n");
  while (1);
  break;
 case 2:
  printf("\nGenerating an illegal arithmetic calculation to invoke the SIGFPE signal\n");
  res = a / (user_pick - 2); /* integer division by zero raises SIGFPE */
  break;
 case 3:
  printf("\nWriting to read-only memory to invoke the SIGSEGV signal\n");
  *s = 'H'; /* string literals are not writable */
  break;
 default:
  printf("Wrong number, you should pick 1-3\n");
  break;
 }
 return 0;
}

We can see that the pointer s is assigned a string literal, which is located in a read-only memory segment, so any change to the value pointed to by s invokes a segmentation fault.
To get a better grasp of this error, you can print the address of the string and then look in /proc/<pid>/maps to see which memory-segment range the string falls into.

There is more to say about signals: they can also be blocked. If you have put some thought into it and came to the conclusion that while your process is sleeping/waiting it should block any signals heading its way, so that it won't be interrupted, you can do this easily by calling the sigprocmask() function; I'll elaborate on the subject soon, but a minimal sketch follows below.
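
Here is that minimal sketch (my own example, ahead of the future post): block SIGINT around a critical section with sigprocmask(), then restore the old mask; a pending SIGINT is delivered only after the mask is restored.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

int main(void)
{
    sigset_t block_set, old_set;

    sigemptyset(&block_set);
    sigaddset(&block_set, SIGINT);

    if (sigprocmask(SIG_BLOCK, &block_set, &old_set) == -1) {
        perror("sigprocmask");
        return 1;
    }

    printf("SIGINT is blocked for 5 seconds, Ctrl+c will be held pending...\n");
    sleep(5);       /* Ctrl+c during this window does not interrupt us */

    printf("Restoring the old mask, any pending SIGINT fires now\n");
    sigprocmask(SIG_SETMASK, &old_set, NULL);

    pause();        /* wait here if no SIGINT was pending */
    return 0;
}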

A great reference about signals handlers can be seen here:
http://osr600doc.sco.com/en/SDK_sysprog/_Signal_Handlers.html

Wednesday, May 21, 2014

Device tree for my Beagleboard-xM

Hey Guys!

About 3 years ago I purchased my BeagleBoard-xM (rev B) over the web.
For those who are less familiar with this toy, it's a board built around a SoC with an ARM Cortex-A8, a serial interface, I2C, etc.
 
So back in those days, after setting it up (partitions, u-boot, kernel image), the operating system running on it was called Angstrom (kernel version 2.6.17), as you may have guessed.

I decided to join a few Google groups which were discussing every subject regarding the new toy on the block :-)

But since then many things have changed, and a few days ago I decided to update my kernel to the latest version. Eventually I decided to move to the Ubuntu distribution and give Ubuntu LTS (kernel 3.14.2) a try.

After setting things up, I tried to launch some of my old code, which was written for the previous kernel, but apparently the subsystem for muxing the pins had changed.
Eventually I decided to delve into the subject and see why I couldn't interact with those gpios.

After some reading I found out that since Linux kernel 3.7 (for the ARM family) a new method has been introduced for describing the hardware: a properly configured device tree must be delivered to u-boot.
A device tree is a data structure which describes the hardware on the system, such as:
  • The number and name of CPUs running on the system
  • Base address and size of the RAM
  • The buses 
  • The peripheral device connections, such as gpios, which I'll be talking about now.
This data structure is loaded by the bootloader and handed to the kernel at boot time. The device tree can easily be configured, since it is stored as a readable source file with the .dts extension, so the developer can modify the tree according to his own needs.

Before the device tree was introduced on ARM, the kernel actually stored this valuable information inside itself (in the binary image, uImage or zImage), but now two binary files are supplied to u-boot:
  1. the device tree blob (dtb)
  2. uImage/zImage

Comment: complete coverage of the device tree can be found in the Linux kernel tree under Documentation/devicetree.


So after configuring the gpios in the device tree, we should compile this source file via the device tree compiler (dtc):


The command:
dtc -O dtb -o omap3-beagle-xm-ab.dtb -b 0 -@ omap3-beagle-xm-ab.dts

If you got lucky and didn't receive any syntax errors, you're good to go!
Now we should tell the kernel to use the updated dtb file; we do that by simply overwriting the corresponding file (I suggest you back up the original dtb file first) in the directory:

/boot/uboot/dtbs/

Now reboot and check whether the system recognizes your attached device.

That's all for now. I hope you enjoyed today's section. See you in the next post!


P.S

I have explained the configuration settings only briefly, since in my opinion they are less interesting, but to get a good start you should read thoroughly the relevant pages from:
  • The "Technical Reference Manual" of the Texas Instruments processor (pages 2,444-2,453).

Tuesday, May 20, 2014

Removing Linux kernel images with a snap of the fingers!

Recently I have been writing and modifying some code in the Linux kernel tree,
using a .config file which was generated via localmodconfig.
You're probably asking yourself: what is localmodconfig?

I'll elaborate: it is a make target which generates the .config file.
The generated .config is quite slim compared to the default distribution kernel configuration,
since many unnecessary kernel modules are simply not compiled during the build,
so compilation takes much less time.
During the installation routine, a few basic steps are taken:


  1. Copies the final image to the /boot folder. You can easily recognize the file since its name consists of the prefix "vmlinuz-" followed by the kernel version.
  2. Copies the compiled kernel modules to /lib/modules, together with other things needed when working with modules (module dependency trees, etc.).
  3. Modifies /boot/grub/grub.cfg, so your fresh Linux kernel image is added as an entry to the GRUB menu. Check it when rebooting your system.

Sometimes I need to get rid of old kernel images which are installed on my system.
Doing it manually annoys me, so I decided to write a script which does the work for me.


Take a look below:


#!/bin/bash

option=1
names=""
cd /boot
clear
echo "The kernels which are installed on your system are:"
for file in ./vmlinuz-*
do
 temp_kernel_name=${file:2:`expr length $file`-2}
 temp_kernel_name=`echo ${temp_kernel_name} | cut -d "-" -f 2-7`
 echo "[${option}] " ${temp_kernel_name}
 names=${names}" "${temp_kernel_name}
 let option=option+1
done
echo "[${option}] Exit"

exit_code=${option}

echo "Pick the linux kernel you would like to remove from your system?"
read user_pick

if [ "${user_pick}" == "${exit_code}" ]; then
 echo "Exiting... Bye!" 
 exit -1
fi

kernel_name_to_remove=`echo ${names} | cut -d " "  -f ${user_pick}` # the -f option of the cut command selects the field number after the space delimiter
kernel_version=`echo ${kernel_name_to_remove} | cut -d "-" -f 1-2`

if [ `uname -r` == ${kernel_name_to_remove} ]; then
 echo "attention: Can't remove kernel, the current kernel is running on your system!"
 echo "please check your request, exiting the script..."
 exit -1
fi

echo "Are you sure you would like to remove kernel: " ${kernel_name_to_remove} "? [y/n]"
read ans

if [ "n" == ${ans} ]; then
 echo "please re-think about it, exiting the script..."
 exit -1;
fi

echo "Starting to remove kernel: " ${kernel_name_to_remove} 
echo "Kernel version: " ${kernel_version}

mkdir -p /boot/removed_kernel_images

# Step 1
for file in ./*${kernel_version}*
do
 temp_file_name=${file:2:`expr length $file`-2}
 echo "removing file: " ${temp_file_name} 
 mv ${file} /boot/removed_kernel_images
done

echo "Finished step 1!"

#Step 2
mkdir -p /lib/modules/removed_kernel_modules
mv /lib/modules/${kernel_name_to_remove} /lib/modules/removed_kernel_modules
echo "Finished step 2!"

#Step 3 modifying: /boot/grub/grub.cfg
mkdir -p /boot/grub/grub_conf_files_removed

if [[ ! -f /boot/grub/grub_conf_files_removed/grub`date +"_%m_%d_%Y_%H_%M_%S"`.cfg ]]; then
 cp /boot/grub/grub.cfg /boot/grub/grub_conf_files_removed/grub`date +"_%m_%d_%Y_%H_%M_%S"`.cfg #backing up the file
fi

cd /boot/grub
res=`grep -n ${kernel_name_to_remove} /boot/grub/grub.cfg | cut -d : -f 1`
echo "res = " ${res}
echo ""
echo ""
echo ""
echo ""

number_of_matches=`echo ${res} | wc -w`
#echo "number_of_matches = " ${number_of_matches}
last_line=`echo ${res} | cut -d " " -f ${number_of_matches}`
let last_line=last_line+1 #removing the last bracket too
start_line=`echo ${res} | cut -d " " -f 1`

echo "start_line = " ${start_line}
echo "last_line = " ${last_line}

# delete the whole menuentry block in one go (deleting line by line would
# shift the remaining line numbers after each removal)
sed -i "${start_line},${last_line}d" /boot/grub/grub.cfg

echo "Finished step 3!"

#Step 4:
update-grub
echo "Finished step 4!"

cd -

echo "Finished, Bye :-)"

Feel free to grab the script from my GitHub repository:

https://github.com/codingforpleasure

So that's all for today. Next time I'll be talking about the exciting concept of device trees. See you then!
