What I Learned at Work this Week: Marking Time From the Window

Mike Diaz
7 min read · Jul 25, 2021


Photo by Giallo from Pexels

I’m a Solutions Engineer. And like many Solutions Engineers, it’s my aspiration to someday work as a Software Engineer. That’s one of the reasons I write this blog every week — because I have a lot to learn if I want to make it.

One piece of advice I’ve heard is that reading pull requests (PRs) is a great way to learn because you can see the code an engineer is writing and follow their thought process on how they solved a problem. I reviewed a PR this week that used a function I wasn’t familiar with: window.performance.mark. Needless to say, the advice paid off: I got to see how a Software Engineer executed their task and now I get to share it with all my readers.

Window.performance

Using the mark function is very straightforward, so before we get there, let’s explore our parent object: window.performance. Here’s a screenshot of the object, which inherits its attributes from the Performance Interface, in action:
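
If you want to follow along, you can reproduce it by pasting a line into your browser’s DevTools console (exactly which properties you see will vary by browser):

// Paste into the DevTools console and expand the logged object
console.log(window.performance);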

Here we see seven properties. The four I’ll walk through, memory, timing, timeOrigin, and navigation, come with caveats on MDN: memory is non-standard, and timing and navigation are deprecated (timeOrigin, for the record, is still standard). But they’ve all got data in my case, so we might as well try to understand what that data means.

1. memory

This is one of the non-standard properties, meaning it’s not supported by all browsers. Since I’m using Google Chrome, I was able to see that it contains a few HeapSize keys, which describe how much memory is allocated to the JavaScript heap. The numbers we see here are all in bytes, meaning we’ve got about a billion free bytes of space on my current page. I wasn’t sure about the difference between totalJSHeapSize and usedJSHeapSize, but according to this Stack Overflow response, the latter refers specifically to the space occupied by JS objects, while the former also includes allocated space that isn’t currently being filled.
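
Here’s a quick sketch of reading those numbers in Chrome (since performance.memory is non-standard, it will simply be undefined elsewhere):

// Chrome-only: performance.memory is non-standard
if (window.performance.memory) {
  const { jsHeapSizeLimit, totalJSHeapSize, usedJSHeapSize } = window.performance.memory;
  // All values are in bytes
  console.log(`limit: ${jsHeapSizeLimit}, allocated: ${totalJSHeapSize}, used: ${usedJSHeapSize}`);
  // Allocated but not yet occupied by JS objects
  console.log(`unused but allocated: ${totalJSHeapSize - usedJSHeapSize} bytes`);
}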

2. timing

The names of the keys here are pretty self-explanatory: they each represent an event in the loading of our webpage. This object provides latency data by recording a timestamp for each event, which we can compare against one another if we want. But what do the numbers mean?

Those long numbers (13 digits, so they’re in the trillions) represent Unix time. They are the number of milliseconds that have passed since January 1, 1970 at 0:00:00 UTC. Unix time assumes 86,400 seconds in a day, which is 60 seconds in a minute x 60 minutes in an hour x 24 hours in a day. Once we start calculating the number of seconds in a year, however, things get tricky because of differences between atomic clocks and solar time. Unix time deals with “leap seconds” by simply ignoring them, but we won’t get into that here. Working under the assumption that there are 365 days in a year (more accurately, 365 and a fraction), that means there are about 31,536,000 seconds in a year (31.5 million).

Multiply that by the 51 years since 1970 and we get 1,608,336,000 seconds, which when converted to milliseconds is about 19 billion less than what we’re seeing in the screenshot. And that makes sense, because it’s actually been 51 years and almost 7 months!
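
We can sanity-check all of that in the console. Since these values are plain Unix-time milliseconds, new Date() can translate them for us (navigationStart and responseEnd are two of the real keys on the timing object):

// About 31.5 million seconds in a year, in milliseconds
const msPerYear = 365 * 24 * 60 * 60 * 1000;
console.log(msPerYear * 51); // roughly 1.6 trillion

// Let the browser do the precise conversion instead
const { navigationStart, responseEnd } = window.performance.timing;
console.log(new Date(navigationStart));     // a human-readable date
console.log(responseEnd - navigationStart); // ms from navigation until the response finished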

3. timeOrigin

This value indicates very specifically when performance measurement began. It is also in Unix time and is a high resolution timestamp, meaning it carries digits after the decimal point that represent microseconds (a microsecond is 1/1000th of a millisecond). It’s accurate to within 5 microseconds.
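
One handy relationship, assuming your system clock hasn’t drifted: timeOrigin plus performance.now() (the milliseconds elapsed since measurement began) should land very close to the current Unix time:

// When measurement began, in Unix time (note the fractional microseconds)
console.log(window.performance.timeOrigin);

// Origin + elapsed time ≈ right now
console.log(window.performance.timeOrigin + window.performance.now());
console.log(Date.now()); // should be within a few ms of the line above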

4. navigation

This object provides information about how the page was reached and how it loaded. The type property has four options that correspond to how the page was reached. 0 means traditional navigation (technically TYPE_NAVIGATE): the page was reached by following a link or a bookmark, submitting a form, running a script, or typing the URL into the address bar. Other options include 1 (TYPE_RELOAD), 2 (TYPE_BACK_FORWARD), or 255 (TYPE_RESERVED, meaning literally anything that isn’t covered by the first three). The other property, redirectCount, is exactly what it sounds like: how many redirects happened before the page was reached.
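
Here’s a small sketch that translates those numbers back into their named constants (the property is deprecated, but it still works in current browsers):

const { type, redirectCount } = window.performance.navigation;

// 0, 1, and 2 map to named constants; 255 is the catch-all
const names = ['TYPE_NAVIGATE', 'TYPE_RELOAD', 'TYPE_BACK_FORWARD'];
console.log(names[type] ?? 'TYPE_RESERVED');
console.log(`redirects before arriving: ${redirectCount}`);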

There isn’t much written about eventCounts, but it turns out to be a read-only, Map-like object that maps event types (click, keydown, and so on) to the number of times each has been dispatched on the page. That also explains the part that initially confused me: size isn’t an event type at all, it’s the Map’s own size property, i.e., the number of event types being tracked. Since that set of types is fixed by the browser, it makes sense that I’m seeing 36 across the board for different tabs.
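
Because it behaves like a read-only Map, we can query individual counts or loop over all of them (browser support varies; I’m in Chrome):

// How many event types are tracked (the mysterious 36)
console.log(window.performance.eventCounts.size);

// Events of one type dispatched so far on this page
console.log(window.performance.eventCounts.get('click'));

// Only print the types that have actually fired
window.performance.eventCounts.forEach((count, type) => {
  if (count > 0) console.log(`${type}: ${count}`);
});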

The onresourcetimingbufferfull property is an event handler that is triggered…when the resource timing buffer is full. Since it’s set to null, we know that nothing will happen in that instance. The resource timing buffer determines how many “resource” entries the browser will hold in our performance object. If we want to track more than the current limit allows, we can use onresourcetimingbufferfull to raise that limit (or clear out old entries) whenever the buffer fills.
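
A minimal sketch of that pattern, using the buffer-management functions the same interface provides (the default buffer size is commonly 250 entries):

window.performance.onresourcetimingbufferfull = () => {
  // Make room for more resource entries...
  window.performance.setResourceTimingBufferSize(500);
  // ...or start fresh instead: window.performance.clearResourceTimings();
};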

[[Prototype]]: Performance

The Performance Interface itself deserves its own section because it defines not only the properties we’ve reviewed but also the functions that make this window object useful to us. When I reviewed that PR at work, I saw three functions: mark, measure, and getEntriesByName.

The purpose of the PR was to determine how much time elapsed between two different events on a page. Our code triggers both events, so the engineer had invoked mark on each one. The logic looked something like this:

const executeFirstEvent = () => {
  // event logic
  window.performance.mark('the_first_event');
};

The function returns a PerformanceMark object that includes a timestamp of when it was created. That value is also stored in the browser’s performance entry buffer, which we can reference later. The object looks like this:
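
(The values below are illustrative rather than copied from my console; your startTime will depend on how long the page has been open.)

PerformanceMark {
  name: 'the_first_event',
  entryType: 'mark',
  startTime: 61246482.5, // illustrative; ms since the page was opened
  duration: 0,           // a mark is an instant, so duration is always 0
  detail: null
}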

The properties mean exactly what you think they mean

startTime is a DOMHighResTimeStamp, meaning it counts in milliseconds (with fractional precision). It represents the time that has elapsed between the initial page navigation and the mark. The number is so high because my window has been open overnight.

If we want to find the object in the performance entry buffer, we can use getEntriesByName (the Interface also includes getEntries and getEntriesByType).
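
Using the name from the snippet above:

// Look the mark up by the name we gave it
window.performance.getEntriesByName('the_first_event');
// → [PerformanceMark {name: 'the_first_event', entryType: 'mark', …}]

// The related lookups work the same way
window.performance.getEntriesByType('mark'); // every mark in the buffer
window.performance.getEntries();             // everything in the buffer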

This returns an array of matching PerformanceMark objects, which in this case consists of the one we just set. The measure function also accepts entry names as arguments, identifies their startTimes, and creates a new entry that details the difference:
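
Here’s roughly what that looked like. I’m assuming the second mark was named 'the_second_event', and the measure name is my own placeholder:

window.performance.mark('the_second_event');

window.performance.measure(
  'first_to_second',   // name for the new measure entry (placeholder)
  'the_first_event',   // startMark
  'the_second_event'   // endMark
);
// → PerformanceMeasure {name: 'first_to_second', entryType: 'measure',
//    startTime: <the first mark's startTime>, duration: ~6000, …}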

This function returns a PerformanceEntry, just like mark does (but this time the type is measure instead of mark). The first argument we provide is the name of the entry. The next two are the startMark and endMark. As we can see, the returned object tells us when the first mark began (startTime) and the difference in timing between them (duration). It took me about 6 seconds to mark my second event after I hit enter on the first.

This function can also be executed with only one argument, in which case the duration property will tell us how long the window has been open. If we pass just two arguments, the second is treated as the startMark, and the returned entry will tell us how much time has elapsed since that mark. We can also pass null as the second argument and an entry of our choice as the third. As you might expect, this returns the time between the initial navigation and the entry we passed. To see some examples, check out MDN.
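
Sketching out those variants (the entry names are placeholders):

// One argument: duration = time since the page was opened
window.performance.measure('whole_session');

// Two arguments: duration = time since the named mark
window.performance.measure('since_first', 'the_first_event');

// null start + named end: duration = navigation start to that mark
window.performance.measure('until_first', null, 'the_first_event');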

Peak Performance

When it comes to engineering, something as simple as setting a timer can get complicated in a hurry. I’m grateful for the abstractions that the Performance Interface provides, and on the flip side I enjoyed learning a bit about things like Unix time and event buffers. Never be afraid to click that link and learn something new. If it’s too obscure, we can always walk away and try again another time.

