What I Learned (off) Work this Week: Don’t Forget Your Algos!

Mike Diaz
9 min read · Nov 8, 2020



I’m taking a slightly different direction for my blog this week. I usually like to write about new topics I’ve come across in the workplace to help myself learn more about them. About two weeks ago, however, I was working on a problem outside of work with a student from The Knowledge House. If you haven’t heard of them before, click that link to learn more. Here’s an excerpt from their mission statement:

We educate, empower, and mentor New York City residents from low-income communities with the technology and workplace skills needed to launch a successful career in the technology industry.

Needless to say, they’re great and I’m grateful that I get to do even a little bit of volunteering for them. A student reached out to me to go over some recursion problems that she had been assigned. It was a tough spot for me because I’ve always struggled with recursion and, honestly, we got through about 0.75 problems in 45 minutes. Credit to the student, who later followed up and shared the solution to the problem. Since I had so much trouble with it, I wanted to review it here.

Recursive Exponents

Here’s the problem we were given:

// Compute the exponent of a number.
// The exponent of a number says how many times the base number is used as a factor.
// 8² = 8 x 8 = 64. Here, 8 is the base and 2 is the exponent.
// exponent(4,3); // 64
var exponent = function(base, exp) {
};

For any reader needing a refresher: a recursive solution must involve a function that calls itself to create a loop. So we placed a call to exponent inside of the definition for exponent:

var exponent = function(base, exp) {
  return exponent(base, exp - 1);
};

The challenge with recursion when compared to some other loop logic is that there isn’t as much built-in control over when the loop will run. The second required piece of a recursion loop is a conditional that will eventually stop it from running. For this problem, we could use our exponent as a sort of counter — we want to multiply a number by itself a number of times relative to that exponent (one time if the exponent is 2, two times if the exponent is 3, etc). Let’s put all that logic together:

var exponent = function(base, exp) {
  if (exp === 1) {
    return base;
  } else {
    return base * exponent(base, exp - 1);
  }
};

If we start with a base of 2 and an exponent of 3, we’ll fail the if conditional and hit the else, which multiplies the base (2) by the result of the function with a second argument of 2 instead of 3. The result of that function is 2 * exponent(2, 1), which hits the if and returns 2. Our recursion ends and the result is 2 * 2 * 2, which equals 8. 2³ is 8, so it works!
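To make that unwinding concrete, here’s a quick trace (restating the same function so the snippet stands on its own):

```javascript
// Same recursive function as above, traced for exponent(2, 3).
var exponent = function(base, exp) {
  if (exp === 1) {
    return base;
  } else {
    return base * exponent(base, exp - 1);
  }
};

// exponent(2, 3)
//   => 2 * exponent(2, 2)
//   => 2 * (2 * exponent(2, 1))
//   => 2 * (2 * 2)
console.log(exponent(2, 3)); // 8
```

Each call waits for the inner call to return before doing its own multiplication, which is why the multiplications pile up and then resolve all at once.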

Exceptions

It’s always good practice to think of exceptions to the standard input, or gotchas, when studying algorithms. For example, if an exponent is 0, the return should always be 1. We can add that to our logic:

var exponent = function(base, exp) {
  if (exp === 0) {
    return 1;
  } else if (exp === 1) {
    return base;
  } else {
    return base * exponent(base, exp - 1);
  }
};

But here’s where we got stuck. What if an exponent is a negative number?

Mathematically, a negative exponent means repeated division rather than repeated multiplication. This is often represented by making the base a denominator in a fraction where 1 is the numerator. So if 2³ means 2 * 2 * 2, 2^-3 means 1 / (2 * 2 * 2).

Recursion made this very difficult for us. We wanted to calculate a value and then perform one final operation on it, which doesn’t map neatly onto a recursive loop. We needed a function that could run the same way every time it was called, so when we realized that we didn’t necessarily have to calculate the denominator first, I figured we had cracked the problem. After all, 1 / (2 * 2) is the same as ½ * ½. So we refactored our function to look like this, which also bails out early when the base is less than 1:

var exponent = function(base, exp) {
  if (exp === 0) return 1;
  if (exp === 1) return base;
  if (base < 1) return 1;
  if (exp > 0) {
    return base * exponent(base, exp - 1);
  } else {
    return 1 / base * exponent(base, exp + 1);
  }
};

The test suite for the exercise used a base of 5 and an exponent of -4, so we would expect our function to run ⅕ * ⅕ * ⅕ * ⅕, which is 1/625, or 0.0016. But we failed our test because our result was 0.0016000000000000005.
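You can reproduce the drift yourself. Here’s the fraction-multiplying version again, renamed exponentFrac (my name, just to keep it distinct from the final version later in this post):

```javascript
// The fraction-multiplying version: a negative exponent multiplies
// by 1/base, which is an inexact binary value, at every step.
var exponentFrac = function(base, exp) {
  if (exp === 0) return 1;
  if (exp === 1) return base;
  if (base < 1) return 1;
  if (exp > 0) {
    return base * exponentFrac(base, exp - 1);
  } else {
    return 1 / base * exponentFrac(base, exp + 1);
  }
};

var result = exponentFrac(5, -4);
console.log(result);            // very close to 0.0016, but not exactly
console.log(result === 0.0016); // false
```

Strict equality against 0.0016 fails even though the result is off by less than a quintillionth.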

Floating Point Numbers

As I’ve written before, computers at their core can only understand binary values. Any numbers besides 0 or 1, or any other character or piece of data for that matter, must at some point be translated into binary code to be processed by a computer. You’ve probably seen how integers are converted to binary numbers such that 1 is 1, 2 is 10, 3 is 11, and so on. But what happens when we want to represent non-whole numbers in binary? Give it a try over at RapidTables and see what you find.

The binary versions of non-integers like ½ or 3.14159 can be referred to as floating point numbers. Representing these non-integers is important, but as we see above, they can take up a lot of space. So rather than save the full binary value directly into our memory, most numbers are represented on computers via scientific notation. As a reminder, the template for a scientific notation formula is:

significand × base^exponent

The significand, also known as the mantissa, “represents the significant digits” of our number. I would describe it as the number we’re starting with: we’re going to multiply this number by a base raised to an exponent, so in the most common scientific notation, it’s going to look most similar to the classical representation of our number:

510 => 5.1 * 10²

5.1 is our significand. Next comes the base, which represents the “numeric system base.” It will almost always be 10, for decimal. But if we were using binary, we might have a formula that looks like this, where the base is 2:

4 => 1 * 2²

The exponent represents how far, and in what direction, we’re going to move our decimal point (more generally, the radix point). If the exponent is positive, our decimal is going to be moving to the right, making the number larger:

1.5 * 10⁵ => 150,000

If it’s negative, the decimal will move to the left and the number will be smaller:

1.5 * 10^-5 => 0.000015
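JavaScript can show you this notation directly: Number.prototype.toExponential formats a number as a significand and a power of ten.

```javascript
// toExponential renders a number in base-10 scientific notation.
console.log((150000).toExponential());   // "1.5e+5"
console.log((0.000015).toExponential()); // "1.5e-5"
```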

By using scientific notation, our computers can use binary digits to represent a much wider range of values compared to a direct translation from binary to decimal. Many programming languages allow us to declare data types that allocate different amounts of space for their values, usually somewhere between 8 and 128 bits. JavaScript doesn’t do this, but instead defaults numeric values to the double-precision format (64 bits). The 1s and 0s that fill up these bits are used to represent the significand, the exponent, and the sign (the base doesn’t have to be stored, because in binary floating point it is always 2). Here’s how the 64 bits break down: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the significand.

We haven’t brought up the sign yet, but it just represents whether the value is positive or negative, hence the need for only one bit. With 11 bits devoted to the exponent and a whopping 52 to the significand, imagine how many different values we can represent with these options!

If we go back to RapidTables and check on the binary value of 3, we’ll see that it is 11. What would that look like in floating point representation?

0 10000000000 1000000000000000000000000000000000000000000000000000

The sign is 0 for positive, the exponent works out to 1 (the stored bits, 10000000000, equal 1024, and double precision subtracts a bias of 1023), and the significand is…also 1? Wouldn’t that be:

1 * 10¹ => 10 // what… (remember, these numerals are binary, so 10 here means 2)

The final piece to this puzzle is that a 1 is always prepended to the front of our significand, because every non-zero binary number starts with a 1. Making this automatic saves us a digit in the significand space, giving us even more combination possibilities! With that implicit 1 in the mix, we get our expected result:

1.1 * 10¹ => 11 // yay!
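If you want to inspect these bits yourself, a DataView can expose the raw IEEE 754 representation of any JavaScript number (float64Bits is my own helper name):

```javascript
// Render the 64 raw bits of a Number as: sign | exponent | significand.
function float64Bits(x) {
  const buf = new ArrayBuffer(8);
  new DataView(buf).setFloat64(0, x);
  const bits = new DataView(buf).getBigUint64(0).toString(2).padStart(64, "0");
  return bits[0] + " " + bits.slice(1, 12) + " " + bits.slice(12);
}

console.log(float64Bits(3));
// prints the same 64-bit string shown above: sign 0, stored exponent
// 10000000000 (1024, i.e. 1 after subtracting the bias of 1023), and
// a significand of 1 followed by zeros (the implicit leading 1 is not stored)
```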

What does this have to do with your algo?

It was really hard for me to understand the connection between this logic and my result of 0.0016000000000000005. But it comes down to the fact that conversions from fractions or decimals to binary values don’t work cleanly if the fraction’s denominator is not a power of 2. In my case, the denominator was 5, so the value of ⅕ could not be represented finitely in binary code. Ultimately, it ends up looking something like this: 0.001100110011… If you want to learn more about why this is, I highly recommend the article I used to write this post.
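You can see the repeating pattern by asking JavaScript to print the stored double in base 2 (Number.prototype.toString accepts a radix):

```javascript
// 1/5 as a double, rendered in binary: the 0011 pattern repeats until
// the 52-bit significand runs out of room and rounding cuts it off.
console.log((1 / 5).toString(2));
// begins 0.00110011001100110011...
```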

When we try to represent this value with floating point, the significand is going to look just like the binary: 00110011 and so on, which means it will eventually run out of bits and have to round. And if we round up, we’re going to have a tiny little 1 at the far end of our result. This is handled well when we run just a single operation, like 1 / 5, but once we start performing mathematical operations between two non-integers, we see strange rounding behavior. The most well-known example is 0.1 + 0.2 !== 0.3.
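It’s easy to verify in any JS console:

```javascript
// The classic floating point surprise: both operands are inexact in
// binary, and the rounded sum lands just above 0.3.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```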

Did your answer count as correct?

As we can see from the 0.1 + 0.2 example, JavaScript doesn’t consider the slightly-off result equal to our desired result. While I felt that the logic was sound, there was a better way of handling the algo anyway. Negative exponents are often represented as multiplying fractions, but it’s also correct to say that we’re repeatedly dividing, just like we’re repeatedly multiplying when dealing with a positive exponent. We know that 5^-4 is 1 / 5⁴, but that also happens to be equal to:

1 / 5 / 5 / 5 / 5

I have to admit that I couldn’t figure this out, but the student I was volunteering to help came back to me later in the day with the solution. Ultimately this was the code that passed all the tests:

var exponent = function(base, exp) {
  if (exp === 0) return 1;
  if (exp === 1) return base;
  if (base < 1) return 1;
  if (exp > 0) {
    return base * exponent(base, exp - 1);
  } else {
    return exponent(base, exp + 1) / base;
  }
};

Why does this work where the other code failed? The simple answer is that we never multiply two non-integer values. Dividing by the exact integer 5 introduces at most one rounding step per operation, whereas multiplying by 0.2, which is already inexact in binary, compounds that representation error at every step. I find this very challenging to understand fully, so I once again point you to the work of Max Koretskyi if you’d like to learn more.
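If you do end up comparing non-integer results, a common pattern (my addition, not from the original exercise) is to compare within a tolerance instead of using ===:

```javascript
// Hedged sketch: approximate equality using Number.EPSILON as a
// relative tolerance, a common workaround for floating point drift.
function approxEqual(a, b, tolerance = Number.EPSILON * 4) {
  return Math.abs(a - b) <= tolerance * Math.max(Math.abs(a), Math.abs(b), 1);
}

console.log(approxEqual(0.1 + 0.2, 0.3)); // true
console.log(0.1 + 0.2 === 0.3);           // false
```

The multiplier on Number.EPSILON is a judgment call; the idea is simply to absorb the last-bit rounding noise without treating genuinely different values as equal.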

Software Engineering is unique compared to every job I’ve ever done because of just how much I learn every day. Even in a situation where I was supposed to be acting as the mentor, I learned something from a student and came away with even more new knowledge beyond that. It can sometimes feel impossible to understand everything — I’m still pretty fuzzy on the very concept I’ve been writing about — but I’m starting to understand why that’s an opportunity rather than a setback. We’re all getting better every day. Keep it up!
