Thoughts on atomic commits and quality of life

Reading up on best practices regarding the shape of commits and their messages is something most software developers have done. It’s not hard to realize that there’s a consensus, and that there are plenty of valid arguments supporting it. This consensus usually boils down to:

  • Correctly formatting commits: writing messages in an editor instead of with the -m flag, keeping the subject line within the conventional length limit, using the imperative mood because it’s shorter and clearer, using the description (body) when it helps, and making sure to maximize the expressiveness and meaning of the message.
  • Agreeing on, and adopting a branch workflow that suits the nature of the product and the development team. This usually entails making sure the master branch always reflects a stable production state, and using a development branch as a staging environment.
  • Never rewriting history on the master and development branches.
  • Committing often and keeping commits small.

Working on a medium-to-large project with awesome colleagues has turned blind belief in the aforementioned best practices into defensible reasoning. In this post, I’ll try to share some of the confirming experiences and observations that have helped me in the process. Hopefully this will inspire more experienced developers to share their ideas, too.

Git built-ins work better if your commits are thought through

There are several Git tools that benefit from a clean and sound commit history; in fact, they’re close to useless if your commit history isn’t good. The most important are bisect, revert, blame and log. I won’t explain them here, since a quick search will turn up explanations much better than those I could dish out myself. But I will try to argue why you can make better use of them if your commits are neatly organized.

git bisect

The wonderful lifesaver git bisect relies on the assumption that every one of your commits is a single logical unit of change. “Successfully” finding the commit that introduced a bug is of little help if that commit actually does several things, and it’s even worse if the commit does anything that’s not stated in its message or description.

You might be thinking: Gee! I’ve never used bisect anyway, nor have I ever needed it. Fear not, the day will come, and you will regret not having separated your commits properly.

Or maybe somebody else will, and that would be even worse: the person bisecting will not be happy about the state of the repository, and you might never even learn about the whole situation.
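For reference, a typical session looks something like this (the good commit hash and the test script are placeholders):

# Mark the current commit as bad and a known-good commit as the other endpoint.
git bisect start
git bisect bad
git bisect good a1b2c3d

# Git checks out the midpoint; test it and report, repeating until it converges.
git bisect good    # or: git bisect bad

# Or let a script drive the whole search (exit 0 means good, non-zero means bad).
git bisect run ./run-tests.sh

# Return to where you started.
git bisect reset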

git blame

This is not particularly about the whodunnit question which git blame answers. It’s more about the commit messages and descriptions you can see when using it.

Here’s an example situation which hopefully better illustrates the case for git blame:

  1. You want to understand a piece of code that you introduced some time ago, but it was modified. It’s a method called addOilToPasta, and some evil person (maybe you) seems to have introduced a side effect. Yikes. You fire up git blame in hopes of finding a clue.
  2. Just as you suspected, you recognize the commit message that first added the method. It reads Implement adding oil on the Pasta component as a class method. But the last three lines, which add the side effect, were added by another commit.
  3. That commit reads Style the header in the Pasta view.

Now you have two choices: read the whole commit to find out which part of it actually has anything to do with the faulty code, or go and ask that person directly (hopefully, it’s not you). Wouldn’t it have been much nicer if there were a separate commit, apart from the Style the header in the Pasta view one, that read Implement adding pepper functionality on the Pasta component? With a proper description, you might even have all the answers you need already.
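By the way, blame can narrow the search down to the suspicious lines directly (the file name and line range here are just for illustration):

# Annotate only lines 40 through 45 of the file.
git blame -L 40,45 src/Pasta.js

# Then inspect the offending commit in full.
git show <commit-hash>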

git revert

Pretty obvious: if your commits are nicely separated and distinctive, then using git revert might be an option. If they aren’t, you’ll need to resort to adding a new commit unrelated to the culprit, or to rewriting history.
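As a quick sketch (the hashes are placeholders):

# Create a new commit that undoes a single, well-separated commit.
git revert abc1234

# Or revert a range without committing yet, so you can review the result first.
git revert --no-commit abc1234..def5678
git commit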

git log

This one should be pretty straightforward too. The quality of your commit messages, their structure, and their separation translates directly into a better log and happier teammates.
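Two log invocations that show what a clean history buys you:

# A compact, annotated view of history.
git log --oneline --graph --decorate

# The "pickaxe": find the commits whose diffs add or remove a given string.
git log -S addOilToPasta --oneline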

Thinking of how to keep your commits small and clean helps write better code

Just like test-driven development helps with planning ahead and makes you feel certain about what you’re doing and what’s next, trying to figure out what your pull request is going to look like when it’s ready is also beneficial.

Planning your tests and commits before you start to code is a way to avoid tunnel vision, auto-piloting and silly mistakes. Thinking of the logical units of which a feature or bug-fix should be composed is a great first step towards solving a problem.

Keeping tests together with their relevant code whenever possible helps

This simply follows from the idea that a commit should be atomic, and that its changes should form a logical unit.

This, however, raises a question that’s interesting and (as far as I understand) difficult to answer. The case for functionality-related changes and their corresponding tests living together in the same commit is easy to defend. But “keep your commits atomic” is easy to say, while the definition of atomic can be molded to your preferences like chewing gum.

For example, say you’re working on a task that adds a button to a view for deleting some resource, confirming the deletion with the user through a modal with a delete button and a cancel button. This is a React + Redux application, so to complete it, you need to do several things:

  • Add some tests before or after each of the following steps.
  • Make sure that the reducer you’ll be using responds to the actions related to the deletion of that resource. If those handlers aren’t there, implement them.
  • Implement functionality to communicate with your API to actually delete the resource.
  • Add the button and possibly style it.

What’s the right way to structure these changes into commits? There are many plausible answers, but ultimately, I think you and your team should agree on one. Depending on how your application is set up, doing all of that might amount to no more than a few lines of changed code; then maybe it makes sense to keep it all in one commit.

Maybe, if you needed to refactor something to make any of the steps happen, then it makes sense to put that refactor in a different commit.
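And if the changes are already sitting in your working tree, Git can still help you split them into separate logical commits:

# Interactively pick which hunks to stage for the next commit.
git add -p

# Commit the staged hunks as their own logical unit, then repeat.
git commit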

It’s hard to go wrong if you actually bring up these questions with your team and figure out a solution together. Hopefully, by now I’ve managed to convince you that it’s worth dedicating time to this.

CI and notification set-ups benefit from commits that build and pass tests

Making sure that every commit builds and passes appropriate tests helps if you want to check out the project at any particular commit hash or (God forbid) roll back a deployment.

If you’ve configured, for example, Slack notifications from your CI system or from GitHub, you’ll find that their usefulness increases greatly if your commits are correctly formatted and expressive. A build failure message that includes commits and their descriptions is only as useful as those messages and descriptions.

Peers will not know what your code does, and eventually, neither will you

Ultimately, you can only write a good message and a good description for your commit if its contents form a single logical unit. Your commits can become your documentation, and navigating your repository’s history can be an extremely powerful tool.

If you haven’t learned to take advantage of some of the more advanced Git tools, maybe somebody else (or future you) will. And they will be extremely glad that the commits they’re navigating look great and make sense.

You can integrate best practices with a comfortable personal workflow

I usually start writing changes and commit as soon as I’ve done anything significant. I like to keep these first commits as small as possible, even if that doesn’t make a lot of sense at the start. This is a safe way to go: it lets me eventually squash everything into bigger but more meaningful commits without the hassle of resetting. Sometimes, if the original commits are good enough, I’ll leave it at that.

The point is, your feature branch does not need to come out perfect from the moment of its creation: create a lot of mini-commits at the start. You can package them up properly later by amending or with an interactive rebase, as sketched below. But do think, right from the start, about how you want the final product to look.
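Here’s a minimal sketch of that packaging-up step (the hash is a placeholder, and I’m assuming the branch forked from origin/master):

# Mark a mini-commit as a fix for an earlier commit...
git commit --fixup=abc1234

# ...and let Git squash all the fixups into place automatically.
git rebase -i --autosquash origin/master

# Or reorder and squash everything by hand.
git rebase -i origin/master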

Knowing Git inside and out will obviously help. I used gitsh for a long time, but since I moved from Vim to Emacs I’ve been using Magit. The thing is just amazing: I’ve never used Git so quickly and freely. It was one of my reasons to move to Emacs, too.

I hope this has given you new ideas, reminded you of old ones, or at least motivated you to bring up these topics with your team. I’m aware that some of the points I’ve tried to make are debatable. If you disagree with anything, please leave a comment, because I’d love to benefit from the discussion. Thanks for making it to the end of the rant, and see you next time!

Making Emacs work like my Neovim setup

Table of contents

  1. Package management: from vim-plug to Package.el and use-package.
  2. Vim things and Evil things: experiences using Evil mode.
  3. Project management and file navigation: from fzf to Helm and Projectile.
  4. Specific packages: a small teaser on alternatives for popular Vim packages.
  5. Theming: everybody wants some eye-candy.
  6. Performance and server mode: naïve comparison of how Neovim and Emacs feel differently performance-wise.
  7. Conclusion and fare-thee-wells.

My configuration repositories

Do not expect extremely polished dotfiles. I know some of you will be pulling your hair out with some of the stuff you see here:

I will not give you counsel, saying do this, or do that. For not in doing or contriving, nor in choosing between this course and another, can I avail; but only in knowing what was and is, and in part also what shall be.


I’ve been a Neovim user and fan for a bit more than a year now. After giving it a reasonable spin, I’ve become quite efficient at working with it, and it’s been a pleasure all the way through. Certainly, I’m a lot faster with my Tmux/Neovim/gitsh workspace than I was with Atom, Sublime Text, or VSCode, and I feel a lot more comfortable.

From this point forward, although I use Neovim, I’ll be using the words Vim and Neovim interchangeably. Whether I refer to the software packages or to a specific user community should be clear from context.

During the last few weeks I’ve noticed several tools and concepts in the Emacs ecosystem that I’ve found attractive enough to try out the platform. These include:

  • Org-mode: I’ve tried the Vim port, and although it’s a wonderful effort at emulating the original Emacs package, I think it would take quite an investment to reach the current scope of Org-mode. I plan to use Org-mode for GTD and for generic note-taking; being able to write my Emacs configuration in Org-mode is a beauty, too.
  • Magit: with my Tmux setup, I initialize several workspaces for each project with a script, and my standard workspace includes a Vim window and another window with several panes, one of which is always a gitsh instance. It’s worked wonderfully for me, but after trying the Magit interface there’s no question that I’ll need fewer keystrokes to do my thing, all while enjoying a beautiful interface.
  • Lisp: admittedly, I could make do with Vim, but Emacs has a Lisp interpreter at its core, so integration is a given. I don’t use Lisp at work and I’m a beginner, but it feels impossible to find anything about Lisp support in Vim where the Emacs solutions aren’t mentioned.
  • Integration: I like the “never leave your editor” and “kitchen sink” approach of Emacs, and although I doubt I’ll ever manage email or browse the web inside Emacs, I feel all warm and fuzzy when I realize I could if I wanted to. Many of these things are arguably possible in Vim, but the Emacs community seems to lean towards them more than the Vim one does.

So I decided to surrender to my sacrilegious self and try to emulate everything I do with Vim, starting from an empty Emacs config file built with Org-mode. And I must say: it’s been a breeze! I haven’t even needed to dedicate much time to learning actual Emacs, and what I have learned has actually been nice. In this post I’ll try to go through what I did to rebuild my setup; I hope you’ll enjoy it as much as I did.

Package management

For package management, the Vim community has contributed several awesome packages, like Pathogen and vim-plug, among many worth mentioning. I’ve always used vim-plug and never had a problem with it. Given how active the Emacs community is when it comes to package development, I expected a solution that would provide the same level of comfort.

Emacs comes bundled with Package, and it covers as much as I’ve needed: it takes care of package repository management, and to configure it I only had to add the links to those repositories and initialize it, as shown below.
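For reference, the whole configuration amounts to a few lines like these (MELPA is just an example archive):

;; Point Package at the archives and initialize it.
(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)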

Package, however, does not take responsibility for automatic fetching, updates, or encapsulation of configuration (all of which vim-plug does, and very well). For this, I’ve found the de facto solution to be use-package. To work with use-package at a minimal level, this is all you need to know:

  • use-package can fetch whatever packages are made available through your Package configuration.
  • A basic declaration looks like this: (use-package package-name).
  • If you add :ensure t, you’ll get automatic fetching of your package and startup checks: (use-package package-name :ensure t).
  • If you add :defer t, your package will load lazily: (use-package package-name :ensure t :defer t).
  • You can add :init, and everything you pass it will be evaluated before the package loads. Here’s where you’ll use (setq key 'value), for example.
  • You can add :config, and everything you pass it will be evaluated after the package loads. Here’s where you’ll initialize modes, for example.

It didn’t take me too long to learn this, and use-package allegedly does a thousand more things which I’ll begin to learn with time.
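Putting those keywords together, a declaration might look like this (magit and the variable it sets are just examples):

;; Fetch magit if it's missing, load it lazily, and split its configuration
;; into before-load (:init) and after-load (:config) sections.
(use-package magit
  :ensure t
  :defer t
  :init
  (setq magit-diff-refine-hunk t)   ; evaluated before magit loads
  :config
  (message "magit is ready"))       ; evaluated after magit loads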

Vim things and Evil things

Evil calls itself the extensible vi layer for Emacs, and claims that it emulates the main features of Vim. I’d say this is an understatement; Evil feels like a complete re-implementation of Vim’s porcelain. It makes you feel right at home once you start using it:

  • Macros: these work exactly as expected. Even making a visual selection and running :norm @q runs your q macro on the visual selection, just like in Vim. The only difference I’ve noticed is that execution is minimally slower, but the slowdown is nothing compared to that of VSCode’s implementation of Vim macros, for example.
  • Registers: registers also work exactly as expected. The only problem I’ve had is that I can’t copy to the clipboard using the + register, but this must be a misconfiguration of Emacs’s clipboard integration on my part, so I suspect it won’t take a huge effort to fix.
  • Command repetition (.): works as expected, except for some actions introduced by other packages. One of these, unfortunately, is evil-surround. Here’s the related issue.
  • Auto-save and safety/backup features: they can be easily configured to not happen at all or to happen in a specified directory (I’m using /tmp).
  • Ex commands (those starting with a colon :), like substitution, substitution with manual confirmation, invocation of macros in normal mode, and so on: these all work great, and I haven’t found an instance where they don’t.
  • Marks: I don’t make extensive use of them, but they also seem to be working great.

Using evil-leader you can configure a leader key. I’ve bound mine to Space and added several keybindings. The same results can be achieved with the more powerful general.el, and if you need chained keystrokes to produce a command (for example, I used to have <leader> wq, which I found faster than :wq), you can use Hydra. I haven’t found a need for these, and I’m doing just fine with evil-leader.
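Mine boils down to something like this (the two bindings are just examples):

;; Enable the leader key globally (before enabling evil-mode) and bind it to Space.
(global-evil-leader-mode)
(evil-leader/set-leader "<SPC>")

;; Example bindings: <SPC> b switches buffers, <SPC> g opens Magit.
(evil-leader/set-key
  "b" 'switch-to-buffer
  "g" 'magit-status)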

Project management and file navigation

My setup in Vim is basically fzf (which I use for many more things outside Vim), powered by Ag (The Silver Searcher) for finding files and by ripgrep for finding text in a project. This works flawlessly.

I’ve found the combination of Helm and Projectile to be an adequate substitute for my former setup. On big projects like Servo, the difference in speed is noticeable (in favor of the Vim configuration), but I can live with that. I don’t know why, but there’s a longer load time on the Emacs setup.

The scope of fzf is by no means comparable to that of Helm and Projectile, so this is not meant to be a comparison; fzf just happens to be what covers my file-finding needs. Both setups enable extremely quick fuzzy search over files and content.

As you can see in my Emacs configuration, my setup for Helm and Projectile is extremely basic, and I haven’t needed further customization yet. And I must say: they look much prettier than the Vim setup I use.

Specific packages

A quick search on your favorite engine will yield at least a couple of different solutions to the problems that some of the nicest Vim plugins solve. Here’s a quick list to encourage you:

  • VimCompletesMe: I enjoyed the simplicity of VimCompletesMe, which basically just extends Vim’s built-in autocomplete features and lets you use them by pressing Tab. I found that the Emacs package auto-complete provides the same ease of use and also feels lightweight.
  • vim-tmux-navigator: in Tmux, I use <my-tmux-prefix>-[hjkl] to navigate panes. Using Vim, I wanted windows to behave as if they were on the same level as Tmux panes, and vim-tmux-navigator works great for that. For Emacs there’s a port called emacs-tmux-navigator.
  • auto-pairs: Emacs has a built-in mode that suits my needs. Enable it with (electric-pair-mode 1).
  • NerdTree: the Emacs port NeoTree does the original justice and, although I haven’t gotten there yet, it can also be extended with Git integration and icons if you use GUI Emacs.
  • vim-emoji-complete: I use this to navigate and autocomplete through a list of Unicode emojis. At the company I work for, we use Gitmojis extensively, so this is actually an important part of my workflow. You should check them out too; it may seem silly, but being able to recognize what every commit does without even reading the message is quite helpful. For Emacs, there’s an even better solution for inserting emojis into your buffer: emojify. It even lets you customize the list of emojis you get; for example, I’ve chosen to display only Unicode emojis, and not GitHub or vanilla ASCII emojis.

Regarding Tim Pope plugins: there’s an Emacs port of everything Mr. Pope does. Many of these build on top of Evil, and it’s a no-brainer to add and use them if you’re used to their Vim counterparts.


Theming

Themes are really easy to set up on Emacs. Just add a use-package declaration for the theme and load it with (load-theme 'pretty-theme t). The second argument automatically answers “yes” to a couple of security questions that pop up every time you load a new theme. Emacs themes can run arbitrary Elisp, so they can do a lot of nasty stuff: make sure you trust the sources you get your themes from.

If I had to complain about anything, I’d say most themes work much better on the GUI version of Emacs, and I use the terminal version (emacs -nw). Many themes’ backgrounds are broken and show up differently depending on your $TERM environment variable. Of the ones I’ve tried, I’ve found Monokai and Badger to look best on terminal Emacs.

Performance and server mode

Neovim feels a lot snappier in many interactions. Most of the time, however, this is not important at all, because it never shows while writing or navigating text inside a buffer.

The main difference in performance shows in startup time. Here’s a quick-and-dirty comparison using time, with my full configuration loaded on both programs:

➜  time nvim +q
nvim +q  0.13s user 0.02s system 97% cpu 0.160 total
➜  time em +q
emacs -nw +q  2.14s user 0.12s system 44% cpu 5.121 total

Please don’t take this as any kind of benchmark: I haven’t done anything to improve startup time on either Neovim or Emacs (like using use-package’s :defer t).

The two seconds of waiting are OK if you open Emacs once and work from there for each project. They are not OK if you’re using Emacs as a default editor for things like Git, or as your $EDITOR environment variable.

Emacs’s solution to this is server mode. Basically, you start an Emacs server from your fully loaded instance (the one that took two seconds to open). From then on, if you want to open Emacs for a quick edit and you don’t need the default directory to be the one you launched Emacs from, you can use emacsclient.

➜  time emacsclient -nw -c -a "" +q
emacsclient -nw -c -a "" +q  0.00s user 0.00s system 0% cpu 3.010 total

Yep: instant! That’s more like it. I have that gravely arcane command (emacsclient -nw -c -a "") set as my $EDITOR environment variable. I also have two aliases:

  • em opens a full Emacs instance.
  • e is used to manually call emacsclient -nw -c -a "", which is also my $EDITOR.

This is admittedly a lot of work compared to just having an editor that loads quickly all the time. But it works! You can see the section of my config file where I set up server mode (basically, there’s no setup).
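In shell terms, the whole arrangement is just this (the alias names are the ones mentioned above):

# -nw: open in the terminal; -c: create a new frame;
# -a "": if no server is running, start one and retry.
alias em='emacs -nw'
alias e='emacsclient -nw -c -a ""'
export EDITOR='emacsclient -nw -c -a ""'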


Conclusion and fare-thee-wells

Voilà! Now I can continue Vimming around. I can Vim around while writing Lisp comfortably, doing some GTD in Org-mode, using Magit, and having leveled up in snobbism 😭.

Jokes aside, it feels good to have given both editors a chance. I’ve certainly had a taste of why both communities are so passionate about their preferences. I’ll write another post as soon as I’ve found out whether I can actually use my new setup as fluently as my former configuration. Until then, happy new year!

My experience contributing to Servo

Some months ago a colleague introduced me to Rust and to the Servo project. Servo is a web browser engine led by Mozilla; its code is available on GitHub and open to contributions.

Working on Servo was attractive to me from the start for several reasons:

  • It’s written in Rust. Rust has an exciting community, it’s low-level yet modern, it isn’t shackled by legacy code or industry requirements, and it seemed like the perfect thing to quench my learning thirst.
  • Servo is a web browser engine, and working on one as a web developer feels like working on the biggest possible project: the foundation on which everything I’d ever done takes place.
  • Servo has an enormous number of issues you’re welcome to take. Some of them may seem extremely cryptic and complicated for someone new, but others are trivial. You’ll find the full gradient of difficulty in Servo issues, and there’s even Servo Starters, which uses GitHub labels to show issues that are Good first PRs.

Let’s talk about what actually working on the project feels like.

Finding an issue

First, you’ll need to find an issue that interests you. Helpful labels are E-easy, Good first PR, and C-assigned. You can filter out issues that have already been assigned to someone by searching with a negative prefix: -label:C-assigned. Here’s a good filter to start with:

is:issue is:open label:E-easy -label:C-assigned

Do not be discouraged by issues that don’t have the E-easy label. In my experience, an E-easy task can end up being a bit more complicated anyway: the line between easy and difficult is blurry, and you’ll only find out what you’re facing once you start working on it.

Also, don’t forget Servo Starters.

Working on your task

Once you’ve found the issue you want to work on, make sure to leave a comment saying you’re working on it, so that two people don’t end up implementing the same thing separately.

Compiling Servo is understandably slow: it’s a huge project. Your first build will likely take between thirty minutes and an hour depending on your machine and connection, and after some re-runs and testing you’ll find yourself with a 15 GB directory.

Servo has its own tool, mach, which you can use to build for development (./mach build -d), build for release (./mach build -r), and do many other things. When you submit your pull request, it must pass some CI tests, which you can run locally first to save time.
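The exact list changes over time, so treat these as an approximation of the checks I run; test-tidy is Servo’s lint pass, and the build and test commands appear elsewhere in this post:

./mach test-tidy
./mach build -d
./mach test-unit
./mach test-wpt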

If all of those pass, you’re almost safe to assume the first CI tests will pass too. Check out the complete .travis.yml file to see the rest. Of course, even if your changes don’t pass the tests, you can still submit your pull request and expect help from the Servo project members.

Submitting a pull request

What you need to do to submit a pull request is carefully explained in Servo’s wiki page about the GitHub workflow. Once you submit your pull request, you’ll be promptly greeted by a dog. You won’t need to talk to bors-servo, but Servo organization members can request CI retries by mentioning @bors-servo.

The review process and regression tests

If needed, you’re guaranteed to receive extensive help from the project maintainers. In fact, in some cases, I’m quite sure any of the Servo organization members could have solved the problem I was facing with less effort than it took to help me. Here’s some live proof. The help I receive when working on Servo makes for an invaluable learning opportunity, and it makes contributing to the project all the more enjoyable. Don’t be afraid to ask questions, and always do your research on the topic: if you study it enough, you’ll be able to discuss it with others and learn even more.

Once your pull request is all green, you can request review by commenting r?. The time until somebody reviews your PR ranges from minutes to a day or two; somebody will be assigned automatically depending on the code area you’re working on. Servo organization members can request a full CI run by commenting @bors-servo try. This triggers the rest of the CI suites, the most important of which is the Web Platform Tests (WPT): a cross-browser regression test suite run for Firefox and (at least in part) by the Chrome, Edge and Safari teams (thanks for the correction, jgraham). Many of the tests that run on the suite for Servo come directly from the WPT, but you can also write new tests, modify existing ones, or modify the expectations for existing test results. Many tests are expected to fail for Servo, and you can also submit a pull request to fix those failures. Of my four pull requests to Servo, two have caused failures on the WPT suite, and most of the related work went into fixing and improving the tests.

Running the full test suite on CI takes a long time, usually around one hour. If a regression test fails there and you’re working on a fix, you can always run the test locally to avoid another full run online. First, make a development build with ./mach build -d, then run the specific test with ./mach test-wpt [path-to-test]. Unexpected test results, such as PASS, expected FAIL, will also make the CI suite fail: you’ll need to update the test expectations by modifying the corresponding .ini file. The guide I linked above has all the information you need on working with the WPT suite.

Resources to help you figure out how to solve an issue

Of course, this depends heavily on what type of issue you picked. But in general, the most important resource is documentation. Reading the HTML Living Standard is a great way to find the right approach to a problem. Usually, the problems are already solved and their solution is in the spec; you’re just writing a different expression of that solution… in Rust. Quoting Kevlin Henney:

The act of describing a program in unambiguous detail and the act of programming are one and the same.

You can also find a huge amount of valuable information on the Mozilla Developer Network: pretty much the same content as the spec, but with a lot of examples and a lot more verbosity. Don’t make the same mistake I did and skim through the examples trying to find the exact line that will solve your problem: reading everything thoroughly is what will help you understand the issue best.

Armed with a powerful enough search tool, you can often read your way out of an issue just by searching the source code. I know all editors have project-wide search functionality, but this is a huge project: as of commit b1d7b6bfcf, the Servo repository has a whopping 6,517,647 lines of code (that’s six and a half million; I used loc to count them). So use something fast like ripgrep: it’ll make your life a lot easier.
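For example (the pattern and directory are just illustrative):

# Search only Rust files under the networking component.
rg -t rust 'http_fetch' components/net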

In the end, you’ll receive the most help from the project maintainers: just ask them. It’s an extremely fun process.

If you think I’ve missed a great resource, please comment below and I’ll be sure to include it in this post, and use it on my own as well.


Working on Servo is one of my main sources of learning nowadays, and I’ll keep trying to find issues I can tackle. The tasks I’ve carried out so far have ranged from a two-line dependency removal to properly setting the origin of fetch requests. The fetch API issue took me almost three months to get merged, mostly because of my lack of understanding of the project. But the project maintainers proved exceptionally helpful and pleasant to work with. They also never tried to rush a solution, and always “followed” (read: “accommodated to”, or “slowed down to”) my pace.

I encourage everyone reading this to check out the project and consider contributing to it. The time you spend working on a project like this is extremely valuable to you as a web developer, and in the end you can feel proud of helping build something on which your applications and websites will probably run in the future.

Why won't my text overflow? Where's my ellipsis!?

The text-overflow property is a PITA to deal with because, on its own, it won’t force text to overflow. Say what? Its name IS text overflow. Anyway, what it actually does is define how a text node behaves if it overflows. The job of actually making it overflow is yours and only yours.

I present to you a checklist that should once and for all truncate that stubborn span and print some beautiful ellipses at the end of your one-liners.

In the checklists, I’ll be using the word “container”, so let’s define it first, just for this post: the “container” is the immediate parent of the element that contains the text node you want to truncate. For example:

<div> <!-- This div is the container -->
  <span>I will NOT truncate. Nononononono, no!</span> <!-- Let's call this one "stubborn child" -->
</div>

Is your container NOT a flex container? Checklist A is your friend. Is your container a flex container? Checklist B is for you.

Checklist A: for non-flex containers

  1. Is the container a block-level element?
  2. Does the container have a computed size determined by your CSS, or does it inherit one?
  3. Did you add the white-space: nowrap; property to the container?
  4. Did you add the overflow: hidden; property to the container?
  5. Did you add the text-overflow: ellipsis; property to the container? (duh)

Checklist B: for flex containers

  1. Does the container have a computed size determined by your CSS, or does it inherit one?
  2. Does the child also have a computed size determined by your CSS, or does it inherit one?
  3. Did you add the white-space: nowrap; property to the child?
  4. Did you add the overflow: hidden; property to the child?
  5. Did you add the text-overflow: ellipsis; property to the child? (duh)
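Here are both checklists as code, with made-up class names, plus one extra property that often matters in practice: min-width: 0 lets a flex item shrink below its content size.

/* Checklist A: a block-level container with an explicit size. */
.container {
  width: 200px;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

/* Checklist B: the flex child carries the truncation properties. */
.flex-container {
  display: flex;
  width: 200px;
}
.flex-container > .stubborn-child {
  min-width: 0; /* allow the item to shrink below its content size */
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}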

Hope that helped. Here’s some nice documentation from MDN about the text-overflow property; it’s well worth the read.

CSS features that Firefox supports but Chrome doesn't

This is a short list of CSS features that work on Firefox but not yet on Chrome; in particular, features that would be really cool to use in production if the other major browsers supported them. Maybe you didn’t know about some of these; hopefully, this will get you informed. I’ll update this post when the glorious day comes that we can use these without polyfills or any other kind of external libraries.

Scroll snap points

Scroll snap points, in case you haven’t heard of them, are a way of introducing precision while scrolling. They’re especially useful on touch devices. Imagine a gallery of images arranged horizontally: a user on a tablet might swipe upwards to continue towards the next image, only for the browser to scroll wildly towards the southern ranges of the website. With scroll snap points, we can tell the browser to gracefully stop scrolling at certain points of our document.

There are some cool libraries that implement this with cross-browser compatibility, such as the ever-famous pagePiling.js (which does a lot more than just scroll snapping), but the native CSS property doesn’t work on Chrome.

CSS scroll snap points work on the horizontal axis as well as on the vertical axis. Here’s an example of a vertically scrolling container with scroll snap points. Hop over to Firefox if you’re not there already.

This will only work on Firefox

For a great guide on how to use this, refer to CSS-Tricks. The basic syntax is as follows:

.container {
  scroll-snap-type: mandatory;
  /* repeat() accepts a length (px, vh, vw) or a percentage */
  scroll-snap-points-y: repeat(100vh);
  scroll-snap-destination: 0 0; /* an x y position */
}


Hyphenation

Next up in our list of CSS features that don’t work on Chrome is hyphenation. Check out this piece of lorem ipsum: the upper picture was taken on Chrome, and the lower one on Firefox. Both have hyphenation active, but it obviously only works on Firefox.

Hyphenation in Chrome vs Firefox

The hyphens property is tied to the language attribute of your HTML, so be sure to set the correct language. Here are the different syntax options you can use, in more detail:

.element-that-contains-text {
  hyphens: none;
  hyphens: manual;
  hyphens: auto;
  /* And as usual... */
  hyphens: inherit;
  hyphens: initial;
  hyphens: unset;
}

Normally you’d use auto. But the manual option is quite interesting (although not very practical, I think). With manual, you suggest line-break opportunities yourself. This can be done in two different ways: by typing a hyphen (-), which suggests the break but prints the hyphen even if no line break happens there, or by adding a soft hyphen (U+00AD), which won’t print but will allow a break when needed.
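For instance, with manual hyphenation (&shy; is the HTML entity for the soft hyphen):

<!-- The lang attribute matters: hyphenation dictionaries are per-language. -->
<p lang="en" style="hyphens: manual;">
  super&shy;califragilistic&shy;expialidocious
</p>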

I have recently been informed, via a tweet by Michael Scharnagl, of a plan to bring hyphens to Blink, which is interesting. Let’s hope it comes to Chrome soon.

The element() function

This feature is awesome! Its effect is a little difficult to pick up on; here’s my attempt at explaining it. The CSS background property accepts several kinds of values, such as colors and image URLs. In Firefox, you can also use the element() function as a value for the background property. Let’s set up two div elements with some simple markup:

<div id='element-on-the-left'>
  <!-- whatever content here, this will be
  the source for the element() function -->
</div>

<div id='element-on-the-right'>
  <!-- here we will print a background
  with element() and it will be awesome -->
</div>

Next, we will use this markup to generate a live image of the element on the left. This image will then be used as a background for the element on the right. The CSS goes like this:

#element-on-the-right {
  /* set whatever size here and... */
  background: -moz-element(#element-on-the-left);
}

The whole thing will end up looking similar to this (here’s a live demo, which obviously only works on Firefox):

element() CSS function demo

Now, at first I thought this was cool, but not extremely useful. Until I saw this post by Vincent de Oliveira. His ideas for using this feature are extensive and really nice! Go check it out.

Sticky positioning

Update: This has been added to Chrome in version 56. Hurray!

No idea how this one slipped past me for the original list. Thanks to reddit user Graftak9000, and to Geoffrey Crofte and Nathan here in the comments, for pointing it out.

Scroll down. This will work on Firefox and (now) Chrome 56+

This sticks to the specified 'top' value without any JS.

This property value solves an issue that many modern websites with sticky headers have. Nowadays, cross-browser implementations of a sticky header (or sidebar, or whatever) involve JavaScript in one way or another. An element with this property behaves as a relatively positioned element until it crosses a threshold specified by its top property, at which point it behaves as a fixed-position element. I guess the demo up there is self-explanatory. Here’s the syntax for sticky elements:

.element {
  position: sticky;
  top: 0; /* your threshold value */
}

Recent versions of Safari also support this, but it doesn’t work properly when the parent has overflow: auto; specified. My demo will not work on Safari because it doesn’t include the -webkit- vendor prefix and because it uses said overflow property. Support in Chrome was enabled behind a flag in versions 23 through 26, but it was dropped later on. A new implementation was developed since and, as noted above, it shipped in Chrome 56. Hurray!