# We Interrupt This Program ...

I’m teaching a course on using and programming a microcontroller, that wonderful small component that handles your car brakes, your microwave, and everything in between. We’re doing experiments on a PIC32 microcontroller, which is built around a 32-bit MIPS32 processor running at 80 MHz. This particular microcontroller (the PIC32MX795F512L) is quite high-end, and comes with 512K of Flash, 128K of RAM, and a slew of peripherals including USB, CAN, Ethernet, SPI, I2C, and of course timers.

So far, we have not touched the topic of interrupts. Instead, we’re using an approach called the cyclic executive: you write a tight loop that scans the microcontroller inputs, decides what to do, and then drives the appropriate output. For example, if you wanted to control an LED with a button, you would write a control loop such as

    void main() {
        SystemInitialization();
        while (1)
            if (ButtonPressed())
                SetLED(1);
            else
                SetLED(0);
    }


This approach is technically called polling, since it is the software that repeatedly scans the hardware inputs (the state of a button) to decide whether there is something meaningful to do (turning an LED on or off).

The alternative approach would be to use interrupts: an interrupt service routine is called when the button is pressed or released, for example via the Change Notification interrupts on the I/O ports.

    volatile int b;            /* shared between ISR and main loop */

    interrupt ISR() {
        b = ButtonPressed();   /* record the new button state */
    }

    void main() {
        SystemInitialization();
        while (1)
            SetLED(b);
    }


While the example is small, it illustrates one of the major troubling issues with interrupts, namely the loss of clarity in the application logic. In the first example, which uses polling, it’s crystal clear when you’re going to turn on the LED, and more specifically what will cause it to be set. When the button is pressed, the logical next step is to set the LED. The second example is more complicated. When the button changes position, a global flag b will be modified accordingly. Meanwhile, whenever the main loop iterates, it will copy the latest status of the flag to the LED. So the logic of the application is broken out over two functions, with a common variable in between them.

Of course, we could have changed the interrupt service routine to update the value of the LED, as follows.

    interrupt ISR() {
        SetLED(ButtonPressed());
    }

    void main() {
        SystemInitialization();
        while (1)
            ;   /* nothing to do; all work happens in the ISR */
    }


But, in all honesty, that doesn’t look much better. Now we have a main program that effectively does nothing; it just waits for interrupts. Furthermore, we have moved application logic (setting the LED on or off) inside the interrupt handler, where it has nothing to do with the cause of the interrupt itself (the button changing state).

To argue further why the polling version is easier to program and maintain, imagine that we’re adding a second button to the system. Pressing either button will turn on the LED, but pressing both will keep the LED off. In the polling version of the program, this is a simple change.

    void main() {
        SystemInitialization();
        while (1)
            if (ButtonPressed() ^ OtherButtonPressed())
                SetLED(1);
            else
                SetLED(0);
    }


In the interrupt service routine version, this becomes tricky, because each button change is tied to a different interrupt service routine. Now, we need to study three different routines in the program to understand what is going on:

    volatile int b1, b2;       /* one flag per button */

    interrupt ISR() {
        b1 = ButtonPressed();
    }

    interrupt ISR2() {
        b2 = OtherButtonPressed();
    }

    void main() {
        SystemInitialization();
        while (1)
            SetLED(b1 ^ b2);
    }


Note that I’m not saying that interrupts are a bad idea. On the contrary, I think interrupts are really cool. But interrupts can be a source of confusion, especially when they are tightly intertwined with program logic, as they tend to be in bare-metal programming.

An often-heard argument is that interrupts are needed for real-time applications. That is stated almost literally in the Wikipedia article on microcontrollers, as well as in just about every book on embedded systems programming you will find. However, this is a misconception. Interrupts do not provide real-time behavior. If anything, they make the timing of your application less predictable: they add overhead because of the additional context switches, and they cause complex timing effects.

Interrupts offer a mechanism to call a function asynchronously; that is, they let you conveniently side-step the main logic of the application to go off and do something else (such as making a note of the changed state of a button). But building the application logic itself entirely out of such side-stepped pieces is complicated. And complexity is the enemy of correctness.

So in the micro-controller class, we’re concentrating on writing Finite State Machines and on cyclic executives that use polling techniques. I’m sure that we will run into a case soon where we can no longer solve the problem at hand using FSMs and a while loop alone. But so far, that hasn’t happened.
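To give a flavor of that style: a classic exercise is button debouncing, where the bouncing contacts of a mechanical switch are filtered by a small finite state machine that the cyclic executive steps once per loop iteration. The sketch below is illustrative, not taken from an actual PIC32 project; the state names, tick count, and `button_step` interface are my own choices:

```c
/* States of a debounce FSM for one button. */
typedef enum { RELEASED, MAYBE_PRESSED, PRESSED, MAYBE_RELEASED } btn_state_t;

typedef struct {
    btn_state_t state;
    int count;             /* consecutive samples in a tentative state */
} button_fsm_t;

#define DEBOUNCE_TICKS 3   /* samples required before accepting a change */

/* Advance the FSM one tick, given the raw (possibly bouncing) input.
   Returns 1 while the button is considered pressed, else 0. */
int button_step(button_fsm_t *fsm, int raw) {
    switch (fsm->state) {
    case RELEASED:
        if (raw) { fsm->state = MAYBE_PRESSED; fsm->count = 1; }
        break;
    case MAYBE_PRESSED:
        if (!raw) fsm->state = RELEASED;                       /* bounce */
        else if (++fsm->count >= DEBOUNCE_TICKS) fsm->state = PRESSED;
        break;
    case PRESSED:
        if (!raw) { fsm->state = MAYBE_RELEASED; fsm->count = 1; }
        break;
    case MAYBE_RELEASED:
        if (raw) fsm->state = PRESSED;                         /* bounce */
        else if (++fsm->count >= DEBOUNCE_TICKS) fsm->state = RELEASED;
        break;
    }
    return fsm->state == PRESSED || fsm->state == MAYBE_RELEASED;
}
```

In the cyclic executive this becomes one extra line per iteration, something like `SetLED(button_step(&fsm, ButtonPressed()))`, and all the timing behavior stays visible in one place.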