When a Click is Not Just a Click

The click event is quite simple and easy to use; you listen for the event and run code when the event is fired. It works on just about every HTML element there is, a core feature of the DOM API.

As is often the case with the DOM and JavaScript, there are nuances to consider. Some nuances of the click event are typically not much of a concern. They are minor, and most people would probably never notice them in the majority of use cases.

Take, for example, listening for the click event on the grandfather of interactive elements, the <button> element. There are nuances associated with button clicks, like the difference between a “click” from a mouse pointer and a “click” from the keyboard. Seen this way, a click is not always a “click” the way it’s typically defined. I have actually run into situations (though not many) where distinguishing between those two types of clicks comes in handy.

How do we distinguish between different types of clicks? That’s what we’re diving into!

First things first

The <button> element, as described by MDN, is simply:

The HTML element represents a clickable button, used to submit forms or anywhere in a document for accessible, standard button functionality. By default, HTML buttons are presented in a style resembling the platform the user agent runs on, but you can change buttons’ appearance with CSS.

The part we’ll cover is obviously the “anywhere in a document for accessible, standard button functionality” part of that description. As you may know, a button element has native functionality within a form; for example, it can submit a form in some situations. Here, we’re only concerning ourselves with the basic clicking behavior of the element. So consider just a simple button placed on the page for specific functionality when someone interacts with it.

Consider that I said “interacts with it” instead of just clicking it. For historical and usability reasons, one can “click” the button by putting focus on it with tabbing and then using the Space or Enter key on the keyboard. This is a bit of overlap with keyboard navigation and accessibility; this native feature existed way before accessibility was a concern. Yet the legacy feature does help a great deal with accessibility for obvious reasons.

In the example above, you can click the button and its text label will change. After a moment the original text will reset. You can also click somewhere else within the pen, tab to put focus on the button, and then use Space or Enter to “click” it. The same text appears and resets as well. There is no JavaScript to handle the keyboard functionality; it’s a native feature of the browser. Fundamentally, in this example the button is only aware of the click event, but not how it happened.

One interesting difference to consider is the behavior of a button across different browsers, especially the way it is styled. The buttons in these examples are set to shift colors in their active state, so you click a button and it turns purple. Consider this image that shows the states when interacting with the keyboard.

Keyboard Interaction States

The first is the static state, the second is when the button has focus from keyboard tabbing, the third is the keyboard interaction, and the fourth is the result of the interaction. With Firefox, you will only see the first two and the last states; when interacting with either the Enter or Space key to “click” the button, you do not see the third state. It stays in the second, or “focused,” state during the interaction and then shifts to the last one. The text changes as expected, but the colors do not. Chrome gives us a bit more: the first two states are the same as in Firefox, and if you use the Space key to “click” the button you’ll see the third state with the color change before the last state. Interestingly enough, if you use Enter to interact with the button in Chrome, you won’t see the third state with the color change, much like Firefox. In case you are curious, Safari behaves the same as Chrome.

The code for the event listener is quite simple:

const button = document.querySelector('#button');

button.addEventListener('click', () => {
  button.innerText = 'Button Clicked!';

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Now, let’s consider something here with this code. What if you found yourself in a situation where you wanted to know what caused the “click” to happen? The click event is usually tied to a pointer device, typically the mouse, and yet here the Space or Enter key triggers the same event. Other form elements have similar functionality depending on context, but elements that are not interactive by default require an additional keyboard event listener to work. The button element doesn’t need that additional listener.
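
To illustrate that point, here’s a minimal sketch, not part of the original example, of what a non-interactive element standing in for a button would need. The #fake_button element and the doSomething function are hypothetical placeholders:

const fakeButton = document.querySelector('#fake_button'); // hypothetical element, e.g. a div with role="button" and tabindex="0"

function doSomething () {
  fakeButton.innerText = 'Activated!';
}

// The click event covers the mouse...
fakeButton.addEventListener('click', doSomething);

// ...but keyboard activation has to be wired up by hand,
// which a real <button> element gives us for free.
fakeButton.addEventListener('keyup', (e) => {
  if (e.code === 'Space' || e.code === 'Enter') {
    doSomething();
  }
});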

I won’t go too far into the reasons for wanting to know what triggered the click event, but I can say I have occasionally run into situations where it was helpful to know. Sometimes it’s for styling reasons, sometimes accessibility, and sometimes specific functionality. Different contexts and situations bring different reasons.

Consider the following not as The Way™ but more as an exploration of the nuances we’re talking about. We’ll explore handling the various ways to interact with a button element, the events they generate, and how to leverage specific features of those events. Hopefully, the following examples show how to pull helpful information out of these events, and how the same ideas can extend to other HTML elements as needed.

Which is which?

One simple way to distinguish a keyboard “click” from a mouse click is to leverage the keyup and mouseup events, taking the click event out of the equation altogether.

Now, when you use the mouse or the keyboard, the changed text reflects which event is which. The keyboard version will even inform you of a Space versus Enter key being used.

Here’s the new code:

const button = document.querySelector('#button');

function reset () {
  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('mouseup', (e) => {
  if (e.button === 0) {
    button.innerText = 'MouseUp Event!';
    reset();
  }
});

button.addEventListener('keyup', (e) => {
  if (e.code === 'Space' || e.code === 'Enter') {
    button.innerText = `KeyUp Event: ${e.code}`;
    reset();
  }
});

A bit verbose, true, but we’ll get to a slight refactor in a bit. This example gets the point across about a nuance that needs to be handled. The mouseup and keyup events have their own features to account for in this situation.

With the mouseup event, just about any button on the mouse can trigger it. We usually wouldn’t want the right mouse button triggering a “click” of the button, for instance. So we check that e.button has a value of 0, which identifies the primary mouse button. That way it works the same as the click event, yet we know for a fact it was the mouse.

The same thing happens with the keyup event, where just about any key on the keyboard will trigger it. So we look at the event’s code property and respond only to the Space or Enter key. Now it works the same as the click event, but we know the keyboard was used, and we even know which of the two keys was pressed.

Another take to determine which is which

While the previous example works, it seems like a bit too much code for such a simple concept. We really just want to know if the “click” came from a mouse or a keyboard. In most cases we probably wouldn’t care whether the source of the click was the Space key or the Enter key. But, if we do care, we can take advantage of the keyup event properties to note which is which.

Buried in the various specifications about the click event (which leads us to the UI Events specification) there are certain properties assigned to the event concerning the mouse location, including properties such as screenX/screenY and clientX/clientY. Some browsers have more, but I want to focus on the screenX/screenY properties for the moment. These two properties essentially give you the X and Y coordinates of the mouse click in relation to the upper-left of the screen. The clientX/clientY properties do the same, but the origin is the upper-left of the browser’s viewport.
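
As a quick sketch of those coordinate properties, reusing the same button from the earlier examples, here’s a listener that does nothing but log them:

button.addEventListener('click', (e) => {
  // distance from the upper-left of the screen
  console.log(e.screenX, e.screenY);

  // distance from the upper-left of the browser's viewport
  console.log(e.clientX, e.clientY);
});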

This trick relies on the fact that the click event provides these coordinates even though the event was triggered by the keyboard. When a button with a click listener is “clicked” by the Space or Enter key, the browser still needs to assign a value to those properties. Since there’s no mouse location to report, it falls back to zero as the default.

Here’s our new code:

const button = document.querySelector('#button');

button.addEventListener('click', (e) => {
  button.innerText = e.screenX + e.screenY === 0 || e.offsetX + e.offsetY === 0
    ? 'Keyboard Click Event!'
    : 'Mouse Click Event!';

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Back to just the click event, but this time we look at those properties to determine whether this was a keyboard or mouse “click.” We take the screenX and screenY properties, add them together, and see if they equal zero, which makes for an easy test. The odds of a mouse click landing exactly at the upper-left corner of the screen have to be quite low. It could happen if someone made the effort of a pixel-perfect click in such an odd location, but I’d say it’s a safe assumption that it won’t happen under normal circumstances.

Now, one might notice the added e.offsetX + e.offsetY === 0 part. I have to explain that bit…

Enter the dreaded browser inconsistencies

While creating and testing this code, the all-too-common problem of cross-browser support reared its ugly head. It turns out that even though most browsers set the screenX and screenY values of a keyboard-caused click event to zero, Safari decides to be different. It applies a proper value to screenX and screenY as if the button had been clicked by a mouse. This throws a wrench into my code, which is one of the fun aspects of dealing with different browsers: they’re made by different groups of people, creating different outcomes for the same use cases.

But, alas, I needed a solution because I didn’t necessarily want to rely only on the keyup event for this version of the code. I mean, we could if we wanted to, so that’s still an option. It’s just that I liked the idea of treating this as a potential learning exercise to determine what’s happening and how to make adjustments for differences in browsers like we’re seeing here.

Testing what Safari is doing in this case, it appears to use the offsetX and offsetY properties of the event to determine the location of the “click” and then apply some math to arrive at the screenX and screenY values. That’s a huge over-simplification, but it sort of checks out. The offset properties give the location of the click relative to the upper-left of the button. In this context, Safari sets offsetX and offsetY to zero, which reads as the upper-left of the button. From there, it calculates the screen properties based on the distance from the upper-left of the button to the upper-left of the screen.

The other usual browsers technically also set offsetX and offsetY to zero, so those properties could be used in place of screenX and screenY. I chose not to go that route. Clicking the exact top-left pixel of a button is certainly possible, while clicking a button that happens to sit at the absolute top-left of the screen is rather difficult. Yet Safari is different, so testing both the screen and offset properties is the result. The code, as written, hopes for zeroes in the screen properties and, if they are there, moves forward assuming a keyboard-caused click. If the screen properties add up to more than zero, it checks the offset properties just in case. We can consider this the Safari check.

This is not ideal, but it wouldn’t be the first time I had to create branching logic due to browser inconsistencies.

In the hope that the behavior of these properties will not change in the future, we have a decent way to determine whether a button’s click event happened by mouse or keyboard. Yet technology marches on, providing new features, new requirements, and new challenges to consider. The variety of devices available to us has given rise to the concept of the “pointer” as a means to interact with elements on the screen. Currently, such a pointer could be a mouse, a pen, or a touch. This creates yet another nuance we might want to consider: determining the kind of pointer involved in the click.

Which one out of many?

Now is a good time to talk about Pointer Events. As described by MDN:

Much of today’s web content assumes the user’s pointing device will be a mouse. However, since many devices support other types of pointing input devices, such as pen/stylus and touch surfaces, extensions to the existing pointing device event models are needed. Pointer events address that need.

So now let’s say we need to know what type of pointer was involved in clicking that button. Relying on just the click event doesn’t really provide this information. Chrome does have an interesting property on the click event, sourceCapabilities, which in turn has a boolean property named firesTouchEvents. But that information isn’t always available, since Firefox and Safari do not support it yet. Pointer events, on the other hand, are available pretty much everywhere, even in IE11 of all browsers.
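
If you’re curious about that Chrome-only property, here’s a small sketch, reusing the same button as before, that simply logs it while guarding against browsers that don’t support it:

button.addEventListener('click', (e) => {
  // sourceCapabilities is not available in every browser, so check for it first
  if (e.sourceCapabilities) {
    console.log('firesTouchEvents:', e.sourceCapabilities.firesTouchEvents);
  } else {
    console.log('sourceCapabilities is not supported in this browser');
  }
});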

Pointer events can provide interesting data about touch or pen interactions, things like pressure, contact size, tilt, and more. For our example here, we’re just going to focus on pointerType, which tells us the type of device that caused the event.
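
For a sense of what’s in there, here’s a quick sketch that just logs a few of those properties from the pointerup event; what you get back depends on the hardware, and values like pressure or tilt fall back to defaults when the device doesn’t report them:

button.addEventListener('pointerup', (e) => {
  console.log(e.pointerType);     // "mouse", "pen", or "touch"
  console.log(e.pressure);        // normalized pressure, 0 to 1
  console.log(e.tiltX, e.tiltY);  // pen tilt angles, if reported
  console.log(e.width, e.height); // size of the contact geometry
});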

Clicking on the button will now tell you the pointer that was used. The code for this is quite simple:

const button = document.querySelector('#button');

button.addEventListener('pointerup', (e) => {
  button.innerText = `Pointer Event: ${e.pointerType}`;

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Really, not that much different than the previous examples. We listen for the pointerup event on the button and output the event’s pointerType. The difference now is that there is no event listener for the click event, so tabbing onto the button and using the Space or Enter key does nothing. The click event still fires, but we’re not listening for it. At this point, the only code tied to the button responds to the pointer event.

That obviously leaves a gap in functionality, the keyboard interactivity, so we still need to include a click event. Since we’re already using the pointer event for the more traditional mouse click (and the other pointer types), we have to lock down the click event so that only the keyboard triggers it.

The code for this is similar to the “Which Is Which” example up above. The difference being we use pointerup instead of mouseup:

const button = document.querySelector('#button');

function reset () {
  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('pointerup', (e) => {
  button.innerText = `Pointer Event: ${e.pointerType}`;
  reset();
});

button.addEventListener('click', (e) => {
  if (e.screenX + e.screenY === 0 || e.offsetX + e.offsetY === 0) {
    button.innerText = 'Keyboard Click Event!';
    reset();
  }
});

Here we’re using the screenX + screenY method (with the additional offset check) to determine whether the click was caused by the keyboard, so that a mouse click is handled by the pointer event. If you wanted to know whether the key used was Space or Enter, the keyup example above could be used. In fact, the keyup event could replace the click event entirely, depending on how you wanted to approach it.

Another take to determine which one out of many

In the ever-present need to refactor for cleaner code, we can try a different way to code this.

Yep, works the same as before. Now the code is:

const button = document.querySelector('#button');

function btn_handler (e) {
  if (e.type === 'click' && e.screenX + e.screenY > 0 && e.offsetX + e.offsetY > 0) {
    return false;
  } else if (e.pointerType) {
    button.innerText = `Pointer Event: ${e.pointerType}`;
  } else if (e.screenX + e.screenY === 0) {
    button.innerText = 'Keyboard Click Event!';
  } else {
    button.innerText = 'Something clicked this?';
  }

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('pointerup', btn_handler);
button.addEventListener('click', btn_handler);

Another scaled-down version to consider: this time we’ve reduced the code to a single handler method that both the pointerup and click events call. First, we detect whether a mouse “click” caused the click event; if it did, we ignore it in favor of the pointer event. This is checked with a test that is the opposite of the keyboard test: is the sum of screenX and screenY larger than zero? The offset check is altered the same way: is the sum of those properties larger than zero as well?

Then the method checks for the pointer event, and upon finding that, it reports which pointer type occurred. Otherwise, the method checks for keyboard interactions and reports accordingly. If neither of those are the culprit, it just reports that something caused this code to run.

So here we have a decent number of examples on how to handle button interactions while reporting the source of those interactions. Yet, this is just one of the handful of form elements that we are so accustomed to using in projects. How does similar code work with other elements?

Checking checkboxes

Indeed, similar code does work very much the same way with checkboxes.

There are a few more nuances, as you might expect by now. The normal usage of <input type="checkbox"> involves a related label element tied to the input via the for attribute. One major feature of this combination is that clicking on the label element will check the related checkbox.
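
For reference, that first pattern looks something like this sketch; the id and text are just placeholders:

<input type="checkbox" id="subscribe" />
<label for="subscribe">Subscribe to my newsletter</label>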

Now, if we were to attach event listeners for the click event on both elements, we get back what should be obvious results, even if they are a bit strange. For example, we get one click event fired when clicking the checkbox. If we click the label, we get two click events fired instead. If we were to console.log the target of those events, we’d see on the double event that one is for the label (which makes sense, as we clicked it), but there’s a second event from the checkbox. Even though I know these are the expected results, it is a bit strange because we expect results from user interactions, yet the results include interactions caused by the browser.
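
Here’s a rough sketch of that test, assuming the placeholder markup above with the same logging listener on both elements:

const checkbox = document.querySelector('#subscribe');
const label = document.querySelector('label[for="subscribe"]');

// Clicking the checkbox logs once; clicking the label logs twice,
// once for the label itself and once for the click the browser forwards to the checkbox.
checkbox.addEventListener('click', (e) => console.log('click on:', e.target));
label.addEventListener('click', (e) => console.log('click on:', e.target));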

So, the next step is to look at what happens if we were to listen for pointerup, just like some of the previous examples, in the same scenarios. In that case, we don’t get two events when clicking on the label element. This also makes sense as we’re no longer listening for the click event that is being fired from the checkbox when the label is clicked.
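
Continuing the same sketch, swapping those listeners to pointerup shows the difference, since only the element actually under the pointer reports an event:

checkbox.addEventListener('pointerup', (e) => console.log('pointerup on:', e.target));
label.addEventListener('pointerup', (e) => console.log('pointerup on:', e.target));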

There’s yet another scenario to consider. Remember that we have the option to put the checkbox inside the label element, which is common with custom-built checkboxes for styling purposes.

<label for="newsletter">
  <input type="checkbox" id="newsletter" />
  Subscribe to my newsletter
</label>

In this case, we really only need to put an event listener on the label and not the checkbox itself. This reduces the number of event listeners involved, and yet we get the same results. Click events fire as a single event for clicking on the label and two events if you click on the checkbox. The pointerup events behave the same as before as well: a single event when clicking on either element.

These are all things to consider when trying to mimic the behavior of the previous examples with the button element. Thankfully, there’s not too much to it. Here’s an example of seeing what type of interaction was done with a checkbox form element:

This example includes both types of checkbox scenarios mentioned above; the top line is a checkbox/label combination with the for attribute, and the bottom one is a checkbox inside the label. Clicking either one will output a message below them stating which type of interaction happened. So click on one with a mouse, or use the keyboard to navigate to them and then interact with Space or Enter; just like the button examples, it should tell you which interaction type caused it.

To make things easier in terms of how many event listeners I needed, I wrapped the checkboxes with a container div that actually responds to the checkbox interactions. You wouldn’t necessarily have to do it this way, but it was a convenient way to do this for my needs. To me, the fun part is that the code from the last button example above just copied over to this example.

const checkbox_container = document.querySelector('#checkbox_container');
const checkbox_msg = document.querySelector('#checkbox_msg');

function chk_handler (e) {
  if (e.type === 'click' && e.screenX + e.screenY > 0 && e.offsetX + e.offsetY > 0) {
    return false;
  } else if (e.pointerType) {
    checkbox_msg.innerText = `Pointer Event: ${e.pointerType}`;
  } else if (e.screenX + e.screenY === 0) {
    checkbox_msg.innerText = 'Keyboard Click Event!';
  } else {
    checkbox_msg.innerText = 'Something clicked this?';
  }

  window.setTimeout(() => {
    checkbox_msg.innerText = 'waiting...';
  }, 2000);
}

checkbox_container.addEventListener('pointerup', chk_handler);
checkbox_container.addEventListener('click', chk_handler);

That means we could have the same method called from the various elements that need this pointer-detection functionality. Technically, we could put a button inside the checkbox container and it should still work the same. In the end, it’s up to you how to implement such things based on the needs of the project.

Radioing your radio buttons

Thankfully, for radio button inputs, we can still use the same code with similar HTML structures. This mostly works the same because checkboxes and radio buttons are created essentially the same way; it’s just that radio buttons tend to come in groups tied together, while checkboxes are individuals even within a group. As you’ll see in the following example, it works the same:

Again, it’s the same code attached to a similar container div, which saves us from adding event listeners to every related element.
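
If it helps to picture that, the markup for such a container might look something like this sketch; the ids, name, and labels are placeholders:

<div id="radio_container">
  <input type="radio" name="color" id="color_red" />
  <label for="color_red">Red</label>

  <input type="radio" name="color" id="color_blue" />
  <label for="color_blue">Blue</label>
</div>
<div id="radio_msg">waiting...</div>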

When a nuance can be an opportunity

I felt that “nuance” was a good word choice because the things we covered here are not really “issues” with the typical negative connotation that word tends to have in programming circles. I always try to see such things as learning experiences or opportunities. Can I leverage what I know today to push a little further ahead, or is it time to explore new things to solve the problems I face? Hopefully, the examples above provide a somewhat different way to look at things, depending on the needs of the project at hand.

We even found an opportunity to explore a browser inconsistency and find a workaround to that situation. Thankfully we don’t run into such things that much with today’s browsers, but I could tell you stories about what we went through when I first started web development.

Although this article focuses on form elements because of the click nuances they tend to have with keyboard interactions, some or all of this can be extended to other elements. It all depends on the context of the situation. For example, I recall many times having to handle multiple events on the same elements depending on the context, often for accessibility and keyboard-navigation reasons. Have you ever built a custom <select> element with a nicer design than the standard one that also responds to keyboard navigation? You’ll see what I mean when you get there.

Just remember: a “click” today doesn’t always have to be what we think a click has always been.

