What I Learned at Work this Week: Sharing My Chrome Extension

Photo by ROMAN ODINTSOV from Pexels

A few weeks ago, I wrote about building a Chrome extension that would help my team check on the filter status of certain conditionally rendered elements. After finishing a draft of the extension, I was all set to publish it to the Chrome Web Store, but paused when I realized there was a small fee to register as a developer. The next Monday, I asked around to see if our company had a developer account that I could use to publish the extension. It turns out that there’s a good way to share a Chrome extension within a select group without publishing it at all!

There are at least two ways to start using an extension with Chrome. Traditionally, we could go to the Chrome Web Store, select an extension from the list of published options, and click “Add to Chrome.”

If I had published my extension, I could have instructed all my teammates to do this to start using it. There’s not really anything wrong with this except for the fact that it would have abstracted the logic from my teammates and made the extension difficult to iterate on in the future. Even if I had uploaded it to GitHub, that version wouldn’t necessarily always be up to date with what I had sent to Google.

During development, I had been running the extension locally in my browser using the "Load unpacked" feature in Chrome's Developer mode, which lets me load an extension directory from my local machine into the browser:

The Developer Mode toggle is on the opposite side of the browser on chrome://extensions

If I wanted to share the extension with a few of my coworkers, I could upload the files to our company's GitHub, plus a README with instructions on how to pull the files down and load them into the browser. I wanted to make it as easy as possible for my team, so I had to brush up a bit on my bash and webpack. Let's take a look:

My packaged extension ended up looking something like this:
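Something like the following (reconstructed from the description below; the logo file names are placeholders):

```
manifest.json
popup.html
popup.js
images/
├── logo16.png
├── logo48.png
└── logo128.png
```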


These files, with the right contents, can be loaded into Chrome and run as a simple extension. The manifest contains metadata and loading instructions for the extension, popup.html defines the extension’s display, and popup.js controls the logic. The logo files are just images for the extension’s thumbnail on the browser.
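For reference, a bare-bones manifest tying these pieces together might look like this. This is a sketch, not my actual file: the extension name and icon paths are placeholders, and it assumes Manifest V2 (which used browser_action; Manifest V3 renamed it to action):

```json
{
  "manifest_version": 2,
  "name": "Filter Status Checker",
  "version": "1.0",
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": "images/logo48.png"
  },
  "icons": {
    "16": "images/logo16.png",
    "48": "images/logo48.png",
    "128": "images/logo128.png"
  }
}
```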

I could have just added these to GitHub as is and told people to pull them down and select the directory after clicking Load unpacked. But I wanted to follow the pattern I'd seen in other extensions, which involved compiling the JavaScript to make it more web-friendly. And even though I have a very small amount of JS in my extension, it's probably still a good idea to compile it.

I’ve written about webpack before, so here’s a quick refresher: webpack can bundle multiple JS files into one, which is useful for deploying a script to the web. Webpack can be used in conjunction with plugin providers like Babel to transpile and/or compile our JS making it smaller, more browser-friendly, etc. It works in conjunction with package.json, which I also had to write but won’t detail in this post.

This sounds like something we want, so how can we run my extension code through a bundler and compiler before loading it into the window? Let’s write a build script:

#!/usr/bin/env bash
set -e
webpack
cp src/popup.html dist/popup.html
cp src/manifest.json dist/manifest.json
cp -a src/images/. dist/images

This script was the biggest challenge for me because I’m very unfamiliar with bash and I had to base this off a much more complicated version that had been written for another extension. Ultimately, I realized that the intent here is just to compile the JS and copy the necessary files to a specific location so that they can be loaded.

The first line starts with #!, also known as a shebang (sometimes written sha-bang). The path after the shebang tells the operating system which program will interpret the commands in the script. In this case, env looks up bash in our current environment's PATH and runs the script with it.

The next command we see is set -e. According to this post on Server Fault, it tells our shell to exit immediately if any command finishes with a non-zero (failure) status. For something as simple as this script, it's likely unnecessary, but the utility is that the shell stops and lets us know as soon as it hits an error. Reading further into the Server Fault post, the downside is that a larger script might not want to die outright the moment it comes across a single recoverable error.
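A quick way to see the difference, using bash -c to run throwaway one-liners:

```shell
# With set -e, the shell stops at the first failing command...
bash -c 'set -e; false; echo "reached"'    # prints nothing, exit status 1

# ...without it, execution continues past the failure.
bash -c 'false; echo "reached"'            # prints "reached", exit status 0
```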

Next we see webpack, which runs this webpack file:

const CleanWebpackPlugin = require('clean-webpack-plugin');
const path = require('path');

const PROJECT_DIR = path.resolve(__dirname);
const SRC_DIR = path.join(PROJECT_DIR, 'src');
const DIST_DIR = path.join(PROJECT_DIR, 'dist');

module.exports = {
  entry: {
    popup: path.join(SRC_DIR, 'popup.js'),
  },
  output: {
    filename: '[name].js',
    path: DIST_DIR,
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: [/node_modules/],
        use: [{ loader: 'babel-loader' }],
      },
    ],
  },
  plugins: [
    new CleanWebpackPlugin(['dist']),
  ],
};

We import our dependencies with JavaScript's require function and assign them to constants (the pre-ES-module syntax). clean-webpack-plugin clears old build files out of the output directory before each build, and path lets us construct file system paths and store them as variables. We do that on the next three lines by defining three constants: PROJECT_DIR, SRC_DIR, and DIST_DIR.

We use two methods of the path object: resolve and join. These are both used to create paths, but join is a bit more flexible in that it can accept arguments that build either an absolute or relative path. Before we get there, we set PROJECT_DIR with path.resolve(__dirname).
__dirname isn’t defined anywhere in this file, but is instead a variable that equates to the absolute path to the directory containing the source file. Our first constant is therefore an absolute path to wherever our webpack lives, which also happens to be the directory that’s housing our project.

Next we use path.join to create paths that lead to an src directory and a dist directory. It’s worth noting that, when webpack is invoked, the dist directory doesn’t exist. But if we continue working our way down the file into module.exports, we see that in entry we select popup from SRC_DIR/popup.js and output a compiled version of that file into DIST_DIR. Since it doesn’t exist, it is at this point created for us.

The module property of module.exports details which loaders run during our compilation. We check for any file ending in .js with test, excluding anything in node_modules. When we find the JS file or files, we run them through babel-loader, which transforms some of the syntax into more widely supported JavaScript. Tada! We have a shiny new (though mostly unchanged, in my case) JS file.
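For babel-loader to know how to transform the syntax, the project also needs a Babel config. Mine isn't shown in this post, but a minimal .babelrc would look something like this (a sketch; the exact preset depends on your setup):

```json
{
  "presets": ["@babel/preset-env"]
}
```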

By this point, we’ve gotten through the tough parts of our build file. The last three steps are copying files from src into dist. popup.js is already there thanks to webpack, so we’re sending over popup.html, manifest.json, and our images. If you’re not familiar with the bash syntax, we’re using the cp (copy) command, which takes at least two arguments: the original location of the file and the destination of the copy. To copy an entire directory, we add the -a flag and put a /. at the end of the path to indicate “everything inside this directory.”
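To see what the trailing /. does, here's a toy run (hypothetical file names):

```shell
mkdir -p src/images dist
touch src/images/logo16.png src/images/logo48.png

# the trailing "/." copies everything *inside* src/images
# into dist/images, creating the destination directory if needed
cp -a src/images/. dist/images

ls dist/images    # logo16.png  logo48.png
```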

This post was a really good exercise because it helped me parse out what was really necessary for my task at hand. I was learning the process from a much more intricate extension, and at first I wasn't sure how certain parts of the build or webpack scripts were being used. As I forced myself to define them, I came to realize that I could delete them entirely without changing my end result (testing each step of the way, of course). This extension has reminded me that it's not always easy to build something simple, because so often we can only find complex examples that incorporate concepts we don't need. We still find value in those examples, however, because they teach us why we might want certain features, or why our application would break without them.


Solutions Engineer