As a front-end developer, I work with a lot of tools. In this article, I describe the tools I use and recommend for building better front-end applications, and how I use them to manage and simplify the development process, which can become genuinely difficult as a front-end application grows.
The first tool I recommend is ESLint. Most new JavaScript applications use TypeScript, and the combination of ESLint and TypeScript improves code quality. With ESLint you can enforce standards for your team, and for yourself. It also makes you think about your code and catches oversights, for example when you forget a type definition for one of your methods.
You can start with ESLint by installing it via npm. After installing ESLint you will find a configuration file called .eslintrc.json in the root of your project. The recommended way is to use a config like this:
"extends": [
"eslint:recommended",
"plugin:prettier/recommended",
"plugin:@typescript-eslint/recommended"
],
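The extends entries above assume the corresponding packages are installed. A typical install, assuming npm and the Prettier and typescript-eslint integrations named in the config, could look like this:

```shell
npm install --save-dev eslint \
  prettier eslint-config-prettier eslint-plugin-prettier \
  @typescript-eslint/parser @typescript-eslint/eslint-plugin
```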
I do not like to have a lot of custom configuration; this helps prevent discussion within the team about specific rules. There are some rules that I do recommend, though. One of them is a simple rule that reports an error for unused imports. To enable this rule, you will need a plugin.
"plugins": ["unused-imports"],
In your rule configuration, you can define what should happen in case of unused imports.
"@typescript-eslint/no-unused-vars": ["error"],
"unused-imports/no-unused-imports": "error",
In addition to these rules, I use a few others. I always require a type definition for a function's return value and do not allow use of the any type. You can read the complete ESLint documentation on its website.
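To illustrate what those two rules enforce, here is a small TypeScript sketch; the rule names in the comments are my assumption of the typescript-eslint rules involved:

```typescript
// With a rule such as "@typescript-eslint/explicit-function-return-type",
// omitting ": number" below would be reported as an error.
export function add(a: number, b: number): number {
  return a + b;
}

// With "@typescript-eslint/no-explicit-any", a signature like
// `function parse(input: any)` would be rejected by the linter.

console.log(add(2, 3)); // prints 5
```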
An example eslint configuration can be found here.
One of my biggest frustrations when joining an existing project is having no code standards. When somebody opens a pull request, you want to talk about what the new code does, not about its formatting. That is why code standards are important. Of course, both matter, but formatting should be consistent across files, which is hard without rules or principles. We cannot automate everything, but some tools can help us. One of these tools is Prettier.
Prettier is a tool that formats the code in your codebase. It supports many programming languages. You no longer need to discuss style in code review, which saves time and energy. After you install it with npm, you also get a simple configuration file called .prettierrc.json. I use a configuration like this:
{
  "printWidth": 140,
  "tabWidth": 2,
  "useTabs": false,
  "singleQuote": true,
  "bracketSpacing": true,
  "jsxSingleQuote": true,
  "trailingComma": "none"
}
In the above example, you can see a few options: a maximum of 140 characters per line, single quotes instead of double quotes, and formatting with spaces instead of tabs. The complete list of Prettier options can be found on their website.
An example Prettier configuration can be found here.
The next step is automating the tools above. I prefer to do two different things. The first, for Prettier, is to enable format-on-save in your code editor. In Visual Studio Code, for example, you can create a settings.json in a .vscode folder at the root of your codebase and set a default formatter and a format-on-save boolean for the whole project. The default formatter below can be installed as a plugin.
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
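As a complete file, the two settings above would sit in .vscode/settings.json like this:

```json
{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true
}
```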
The other step is to implement a pre-commit hook that runs before a commit reaches your repository. I use Husky for that. Husky is a tool that runs Git hooks, which you can use to lint your commit messages, run tests, and, in my case, run the lint rules above. You can also combine these; I often use it for linting code and running unit tests.
You can install Husky via an automatic command, which gives you everything you need for your first pre-commit hook, including an example that runs your tests. You can also install Husky manually; the steps can be found here. After installation you will find a .husky folder in the root of your codebase containing the commit hooks that are used.
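As a sketch of that automatic setup, assuming Husky version 7 or later (check the Husky docs for your version):

```shell
# One-shot setup: creates the .husky folder, a sample pre-commit hook,
# and adds a "prepare" script to package.json
npx husky-init && npm install

# Append your own command to the pre-commit hook, e.g. lint-staged
npx husky add .husky/pre-commit "npx lint-staged"
```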
The last step to prevent bad code from entering your codebase is the tool lint-staged, which runs linters over staged Git files. This gives you control before code is pushed to the repository. To use lint-staged, install it via npm. After that, you can configure which files are checked when the lint-staged command runs, either via a lint-staged section in your package.json or via a .lintstagedrc.json file in the root of your directory. The recommended way is to use a config like this:
{
  "*.{ts,tsx,html}": "eslint --cache --fix",
  "*.{ts,tsx,html,scss}": "prettier --write"
}
The example above also includes a fix flag for ESLint and a write flag for Prettier, so during the pre-commit phase files are fixed automatically when possible. To use lint-staged in your pre-commit hook, add a file like this:
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
npx lint-staged
npm run lint
It does not catch everything, but it does help. To keep control over your application, I recommend also running linters and tests in your pipeline. That way you know for sure that your code has been checked.
An example Lint-Staged / Husky configuration can be found here.
It can be difficult to produce quality code when the business is waiting for new functionality. To improve code quality, I use SonarQube, a tool that helps you find bugs, code smells, vulnerabilities, and duplications. It supports many languages, including JavaScript, TypeScript, HTML, and CSS. For brevity, I will not explain how to install SonarQube in this article, but I hope you see the added value.
SonarQube performs static analysis of your code. For me, the greatest benefit is simpler code, which helps you understand other developers' code better and make fewer mistakes. After a scan, you see metrics for complexity, vulnerabilities, and duplications. This information increases your productivity, maintainability, and consistency, and I believe it can extend the lifetime of the application. In addition, it helps improve your skills as a developer.
Every front-end application consists of many packages, included in the package.json with a specific version. It is important to update these packages regularly. If you do not, you end up with a big-bang upgrade that takes a lot of time or, more importantly, with security issues. Your application can also become outdated on specific devices. Postponing updates makes future development hard.
Just as with hooks, you can use a bot to update these packages. Renovate is one such bot for automated dependency updates. It scans your dependency tree for old versions and updates them via a pull request. First, connect Renovate to your repository so it has permission to read your codebase. Then create a renovate.json in the root of your directory. There are some configurations I can recommend. One of them is auto-merging minor and patch updates.
"packageRules": [
  {
    "matchUpdateTypes": [
      "minor",
      "patch"
    ],
    "automerge": true
  }
]
By default, Renovate creates a PR for every package that can be updated. There is a rule to group packages so that you get fewer pull requests; you can create groups based on a pattern and bundle packages that way. In the example below I created a group for eslint, which combines updates for eslint and @next/eslint into a single pull request.
{
  "matchPackagePatterns": [
    "^eslint",
    "^@next/eslint"
  ],
  "groupName": "eslint"
},
There are many more options to configure, for instance when Renovate should run and how many pull requests may be open at the same time. Auto-merge is useful if you have a good CI/CD pipeline with linters, tests, and finally deployment. The complete documentation can be found on the Renovate website.
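Putting the fragments above together, a complete renovate.json could look like this (the "config:base" preset is Renovate's recommended starting point):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    },
    {
      "matchPackagePatterns": ["^eslint", "^@next/eslint"],
      "groupName": "eslint"
    }
  ]
}
```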
An example Renovate configuration can be found here.
A large part of this article is about consistency and automation. The code must be clear to other developers and to yourself, and good maintainability extends the lifetime of your application. A few simple tools and configurations can make the code a lot better. Why not use them? It will not make anything worse.
I work hard on code consistency. In my experience, I want to discuss what the code should do, not personal preferences about coding standards. By automating and maintaining these tools, you avoid that discussion. Hopefully this article helps you with your own challenges in developing front-end applications.