I've tried nushell before, but backed away after learning I'd have to rewrite my rc files and the scripts I use to streamline my terminal experience.
I was driven to come back because I recently came across starship. I played with it and after 15 minutes was able to recreate my oh-my-posh configuration with very little effort (except for transient prompts).
I was so impressed that I immediately added starship to my stow collection, began using it, and started exploring nushell's scripting language and configuration management.
Ultimately, because of the features below, I will be switching to nushell as my login shell once I port all of my scripts.
Pipelines are a great way to work with structured data
Everything in nushell has a datatype, and nushell has re-implemented some of the coreutils within its environment. This means that not all commands return plain text like they do in `bash` or `zsh`.
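A quick way to see the types in play is `describe` (the output below is representative of a recent nushell):

```nu
$ 'hello' | describe
string
$ ls | describe
table<name: string, type: string, size: filesize, modified: date>
```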
A very useful example is `ls`, which outputs a table. This unlocks a lot of helpful features, such as sorting and selecting from the output.
What about a list of software projects I might be neglecting?
```
$ ls dev/ | sort-by modified
╭────┬────────────────┬──────┬─────────┬──────────────╮
│  # │      name      │ type │  size   │   modified   │
├────┼────────────────┼──────┼─────────┼──────────────┤
│  0 │ dev/gtk        │ dir  │    12 B │ 2 years ago  │
│  1 │ dev/s0ix       │ dir  │ 1,012 B │ 2 years ago  │
│  2 │ dev/jupyter    │ dir  │    10 B │ 2 years ago  │
│  3 │ dev/framework  │ dir  │     8 B │ 2 years ago  │
│  4 │ dev/java       │ dir  │    50 B │ a year ago   │
│  5 │ dev/cloud      │ dir  │     0 B │ a year ago   │
│  6 │ dev/kalyke     │ dir  │    12 B │ a year ago   │
│  7 │ dev/wasm       │ dir  │     8 B │ a year ago   │
│  8 │ dev/rust       │ dir  │    96 B │ a year ago   │
│  9 │ dev/wikijs     │ dir  │     8 B │ a year ago   │
│ 10 │ dev/extern     │ dir  │    40 B │ a year ago   │
│ 11 │ dev/scratch    │ dir  │     0 B │ a year ago   │
│ 12 │ dev/blog       │ dir  │    50 B │ a year ago   │
│ 13 │ dev/gh         │ dir  │    28 B │ a year ago   │
│ 14 │ dev/play       │ dir  │   164 B │ a year ago   │
│ 15 │ dev/go         │ dir  │    26 B │ a year ago   │
│ 16 │ dev/bugs       │ dir  │    28 B │ 9 months ago │
│ 17 │ dev/python     │ dir  │    20 B │ 9 months ago │
│ 18 │ dev/callisto   │ dir  │   240 B │ 8 months ago │
│ 19 │ dev/interviews │ dir  │     8 B │ 5 months ago │
│ 20 │ dev/node       │ dir  │    48 B │ 2 months ago │
│ 21 │ dev/leetcode   │ dir  │    18 B │ 2 months ago │
│ 22 │ dev/resume     │ dir  │   210 B │ 2 months ago │
│ 23 │ dev/web        │ dir  │    28 B │ 4 days ago   │
│ 24 │ dev/interview  │ dir  │    96 B │ a minute ago │
│ 25 │ dev/iac        │ dir  │    42 B │ a minute ago │
├────┼────────────────┼──────┼─────────┼──────────────┤
│  # │      name      │ type │  size   │   modified   │
╰────┴────────────────┴──────┴─────────┴──────────────╯
```
List of code projects in ~/dev sorted by modified date
The authors have great attention to detail and include the column names at the top and bottom of long tables.
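And because `modified` is a real date rather than text, we can filter on it too. A small sketch, assuming I wanted only the projects untouched for over a year:

```nu
# directories whose last modification is more than a year old
ls dev/ | where modified < ((date now) - 365day)
```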
Let's look at another common use case: getting the 5 largest binaries installed by my package manager.
```
$ ls /usr/bin/ | sort-by size | reverse | first 5
╭───┬───────────────────┬──────┬──────────┬──────────────╮
│ # │       name        │ type │   size   │   modified   │
├───┼───────────────────┼──────┼──────────┼──────────────┤
│ 0 │ /usr/bin/helm     │ file │ 79.0 MiB │ 3 months ago │
│ 1 │ /usr/bin/lapce    │ file │ 57.9 MiB │ a year ago   │
│ 2 │ /usr/bin/podman   │ file │ 44.5 MiB │ a month ago  │
│ 3 │ /usr/bin/lto-dump │ file │ 34.9 MiB │ 2 weeks ago  │
│ 4 │ /usr/bin/nu       │ file │ 34.6 MiB │ 3 months ago │
╰───┴───────────────────┴──────┴──────────┴──────────────╯
```
Top 5 largest files in /usr/bin
I am actually quite surprised helm is larger than lapce!
It is easy to support external binaries in nu
I currently maintain a cloud platform for a software organization. `kubectl` outputs a plain-text table by default. Nushell's answer for many of these situations is `detect columns`, which picks up on the delimiter used and converts the output into a table. We can streamline this in our `config.nu`:
```nu
alias k = kubectl

# custom command: pipe plain-text output in, get a table out
def kout [] {
  detect columns
}
```
This enables us to do things like finding the pods that restart most often in our dev environment:

```nu
k get po -n dev | kout | sort-by RESTARTS | reverse | first 5
```
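One caveat worth hedging: `detect columns` produces strings, so the sort above is lexicographic. If that bites, a cast first should fix it (assuming the default `RESTARTS` header from `kubectl`):

```nu
# cast RESTARTS to an int so pods sort numerically by restart count
k get po -n dev | kout | update RESTARTS { into int } | sort-by RESTARTS | reverse | first 5
```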
It has Rust's match syntax
I love Rust's `match` statement. It's a very intuitive way to handle flow control and helps express error handling much more concisely than `if` or `case` statements.
Back on the topic of Kubernetes, I often use two functions I've named `chk` and `chn`, which streamline switching between kube contexts and cluster namespaces. Using nushell will not speed me up if I cannot use these with ease, so I had to convert them into nushell commands.
Starting with `chn`, whose POSIX implementation looks like this:
```sh
function chn() {
  if [ -z "$1" ]; then
    kubectl config view --minify -o 'jsonpath={..namespace}'
  else
    kubectl config set-context --current --namespace=$1
  fi
}
```
As a naive first attempt at `chn`, I maintained the if/else structure:
```nu
def chn [ns: string] {
  if not $ns {
    `kubectl config view --minify -o 'jsonpath={..namespace}'`
  } else {
    `kubectl config set-context --current --namespace=$ns`
  }
}
```
There are two problems with this approach. First, the backticks resolve to a literal `$ns` and not our first argument. Additionally, if we run `chn` without an argument, it will not print the current namespace; instead it will throw an error, because `chn` requires at least one argument.
Nushell's scripting language feels a lot like a simplified Rust or TypeScript. Notably, it requires you to mark an optional input argument with `?`.
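For example, a minimal sketch (the command name is illustrative, not from my real config):

```nu
# `name?` makes the argument optional; it arrives as null when omitted
def greet [name?: string] {
  if $name == null {
    "hello, stranger"
  } else {
    $"hello, ($name)"
  }
}
```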
After further reading, I found the support for `match`. Ultimately I ended up with these as my new commands:
```nu
def chk [ctx?: string] {
  match $ctx {
    null => (kubectl config get-contexts)
    _ => (kubectl config use-context $ctx)
  }
}

def chn [ns?: string] {
  match $ns {
    null => (kubectl config view --minify -o 'jsonpath={..namespace}')
    _ => (kubectl config set-context --current --namespace $ns)
  }
}
```
With `match`, modules, and unit testing, Nushell feels more like a programming language than a scripting language.
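On that last point, the standard library ships an `assert` module you can build tests from. A minimal sketch (the test name and assertion are illustrative):

```nu
use std assert

# a tiny test command; it raises a labeled error if the assertion fails
def "test into-int" [] {
  assert equal ("3" | into int) 3
}
```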
It is cross-platform
Automating GitOps through CI workflows involves a lot of headless scripting. Primarily we do this in `bash`, and we sometimes introduce `python` if the problem needs more complex error handling.
Neither of these is easy to distribute. With `bash`, if you're not careful you may find that some of macOS's coreutils do not behave the same as their GNU counterparts (looking at you, `grep` and `date` 🫠).
The problem with `python` is that once you pull in a dependency you can unknowingly get pinned to a specific glibc version (try running the `az` CLI in an older Ubuntu container), and you have to install, or cache, those dependencies on each workflow run. I hope your `requirements.txt` / `pyproject.toml` / `uv.lock` are pinning the correct versions.
I was very happy to see that I could copy my `config.nu` from my Framework 13 (Fedora) to my work MacBook, and my aliases and scripts all worked without issue. I will still run into `date -d` differences and need to install GNU grep from Homebrew if I want Perl regex.
These scripts also work out of the box on Windows. Nu offers cross-platform compatibility for `/` path separators even when executed outside of WSL.
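Path handling is a big part of why. Nu's `path` commands build platform-appropriate paths instead of hard-coding separators; a small sketch (the app path is hypothetical):

```nu
# joins with the right separator on Linux, macOS, and Windows
let cfg = ($nu.home-path | path join ".config" "myapp" "settings.toml")
```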
Writing nu scripts tells a better story long term, especially knowing the language is type safe and has significantly easier error handling than POSIX shells.
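On the error-handling point, nu has structured `try`/`catch` rather than exit-code bookkeeping; a minimal sketch:

```nu
# catch receives an error record, no need to inspect $env.LAST_EXIT_CODE
try {
  open config.toml
} catch {|err|
  print $"could not read config: ($err.msg)"
}
```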
It has a built-in http client
Back on the topic of CI workflows: a common pattern is piping `curl` or `wget` into `yq` and `jq` to consume serialized data from APIs. We cannot assume `yq`/`jq` are available, so we write reusable actions to install them, which become boilerplate in all of our workflows.
You may think "why not simply use curl? 🤌". Well, what if this:
```sh
curl https://jsonplaceholder.typicode.com/comments | jq '.[-5:][].name' > 5-most-recent-comments.json
```
Could become this:
```nu
http get https://jsonplaceholder.typicode.com/comments | last 5 | get name | to json | save 5-most-recent-comments.json
```
Yes, it's a tad longer; however, it's more intuitive to a human, reads more clearly, and is significantly less error prone than jq's DSL.
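And because `http get` hands back structured data, you can keep going without serializing in between. A sketch against the same API (the `postId` field comes from the jsonplaceholder schema):

```nu
# count comments per post and show the three busiest posts
http get https://jsonplaceholder.typicode.com/comments
| group-by postId
| transpose post comments
| update comments { length }
| sort-by comments
| reverse
| first 3
```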
When using workflow matrices or approval gates on workflow environments, you cannot use `act`. As a result, your feedback loop extends significantly: push to a branch and wait for the pipeline output to show any errors. This means time spent on errors adds up quickly when you're building workflows in bash, and Python can become a ticking time bomb waiting to aggravate you later when workflow runners or the Python version are updated.
Replacing these with unit tested nu scripts has me more confident that I won't accidentally push a regression and cause headaches later.
It's a nu day, and a nu life, and I'm feeling good
And so here I am, porting my POSIX scripts to nu in my free time. It will cost me a bit of time up front, but the maintenance savings will compound over time.
And let's be real, nobody maintains things for fun; we're here to build.
🍻