If you’re using a GUI, that usually means that whatever you’re doing, you’re not doing a lot of it, because otherwise you’d need to automate it. I would expect a world-class enterprise engineer to be able to automate most tasks, and from that it follows that they’d be very comfortable with the command line.
Can you do everything with a GUI that you can on a command line? Yeah, probably, if the developer has exposed all the features properly. Can you automate it easily? No, not at all. So the more you do something, the more you tend to want to deal with the vocabulary of the command line, because it’s more expressive and allows for automation.
I will die on this hill!
Documentation too. Frontends change all the time, but CLI tools usually don’t, so you can usually rely on old documentation. But have you ever tried googling how to do something in MS Office, found an article from half a year ago, and discovered that none of the things it mentions exist anymore? It’s ridiculous how much time people waste figuring out the same stuff over and over because it changes so much.
After long periods of not using GUIs, I find myself very confused every time I want to do something. I was trying to insert a code block into PowerPoint yesterday; it took me half an hour of googling and I still didn’t manage to do it. With LaTeX, I googled and in 2 minutes I had a code block.
Given that LaTeX is a clusterfuck of legacy, it speaks volumes that it’s still so much easier to do things there than in PowerPoint.
With MS office I’ve also adopted a “fuck it, I’ll just take a screenshot” approach.
A collection of screenshots, sent around by mail after poorly drawn arrows and frames have been scribbled on them, is official documentation. Source: my corporation.
Yup, I tried doing it properly too when I started and now I don’t give a shit. If the company wants us to use crappy tools, that’s what they get.
What are you saying? The project is finished, the new stuff is implemented, and now you want to buy some fancy software and schedule 100 hours for documentation? We don’t need that! Just help out your colleagues when they have a question. They’ll all know what to do in no time!
Depends on what system you’re running, and especially what task you’re doing. Trying to manage firewall rules via the CLI is an exercise in self-inflicted pain, as is trying to set up a complex cron schedule without a handy calculator.
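For the flavor of it: even a fairly tame schedule like “every 15 minutes, business hours, weekdays only” is something most of us end up double-checking against a cheat sheet (the job path is invented for illustration):

```
# min  hour  dom  month  dow   command
*/15   8-17  *    *      1-5   /usr/local/bin/sync-reports
```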
on the contrary, CLI is the BEST way to deal with firewall rules.
Personally, I’d take it a step further. Firewall rules should be defined as code in a git repo, so if you’re building rules in a GUI, you’re simply doing it wrong. A CLI and/or API should still be used under the hood, but it should be automated and invisible to a human.
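A minimal sketch of what that can look like, assuming nftables and a made-up repo layout where rules/edge.nft is the committed ruleset and CI (not a human) runs the apply step:

```bash
#!/usr/bin/env bash
set -euo pipefail
# rules/edge.nft is the nftables ruleset checked into the repo (it starts with "flush ruleset")
nft -c -f rules/edge.nft   # parse/check only; fail the pipeline on syntax errors
nft -f rules/edge.nft      # apply the committed ruleset as a single transaction
```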
TIL there are people configuring firewalls via GUIs. Okay… I do that too on my private equipment because I’m lazy. But it feels wrong doing so in an enterprise context.
You using a Cisco firewall or something?
I’ve been using F5 in the past. Not doing that anymore though.
Junos CLI is a real treat. I work with the SRX line regularly, particularly the SRX4600 and the SRX300 series.
CLI debuggers can’t hold a candle to the Visual Studio debugger. This is generally not something you automate, and I haven’t met many engineers who know `gdb` well. But pretty much anyone can use the VS debugger.
Honestly, some things can be done just as fast or faster in a GUI. So really, just use whatever increases your productivity.
IMO GUIs are always faster when it’s something you’ve never used before, or use very infrequently.
CLI is better if you’re used to the task you’re doing, or automating things. But for infrequent tasks looking up the commands (or looking at old notes to find it) is very slow and rather annoying.
Moving files across several subfolder levels tends to be much faster in a GUI. Finding files is usually much faster via the CLI, even when you have to look up again how to use the find command of your choice.
The more you use the commands the more you remember them. I got good at the CLI by forcing myself to use it for things I would normally do in a GUI. Now everyone thinks I’m a wizard which I won’t discourage
Is there an instant GUI find tool on Linux? `find` is very slow compared to using Everything on Windows, and sorting results is really hard via CLI.
Oh, you’re not aware of `locate`?
I am, but when searching via the CLI I’m not sure how to easily sort by last modified time, or restrict the search to a specific root path first.
I don’t know about GUI tools, but:
Everything is so fast because it uses the index built into NTFS to find files by filename quickly, and since NTFS is the default file system on Windows, it works pretty much everywhere.
On Linux, there isn’t really an index built into the filesystem (some might have one, but I don’t know of it). That said, `plocate`, a faster take on the classic `locate`, is a common tool that uses its own index. You have to update the database when files change (you’ll probably have a job doing that daily), but searching the index is very fast.
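To answer the sort/scope question directly, a couple of sketches (paths invented):

```bash
# newest-modified files under one subtree, sorted by modification time (GNU find)
find /srv/media -type f -printf '%T@ %p\n' | sort -rn | head -n 20

# index-backed search limited to a path prefix; a glob is matched against the whole path
locate '/srv/media/*.mkv'
sudo updatedb   # refresh the index; distros usually ship a daily timer or cron job for this
```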
I usually just make a .bat or .py script to move and create specific files in specific folders.
I only do this because I’m lazy and numbering, renaming, and creating folders is a drag that can be easily automated, but plain copy/paste or cut/paste is faster in a GUI, especially with Alt+Tab and the new tabbed File Explorer on Windows.
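On the Linux side, the same idea is a short shell loop; a minimal sketch, with the ~/scans and ~/archive layout made up:

```bash
#!/usr/bin/env bash
# number incoming PDFs and file them into one folder per month of last modification
i=1
for f in ~/scans/*.pdf; do
    dir=~/archive/$(date -r "$f" +%Y-%m)   # GNU date: -r prints the file's mtime
    mkdir -p "$dir"
    mv "$f" "$dir/$(printf '%03d' "$i")-$(basename "$f")"
    i=$((i + 1))
done
```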
A GUI with a search function is always the best way to deal with filesystems, in my experience.
Always orders of magnitude slower and near-infinitely less featureful, in my experience.
Your filesystem must be monstrously huge if it’s actually perceptibly slow. I also get tired of typing in long filenames with a ton of special characters I have to escape.
You’ve never had to search through hundreds of gigabytes of source files, I guess. Congratulations.
No, I’ve never had that displeasure; nothing I’ve worked on has been that big. My condolences.
Pshaw! CLI and GUI? Real network engineers make hand crafted API calls!
I love xkcd 🤣
You gotta admit, it’s fun to meme the opposite camp. Whether you are a GUI or CLI person.
I use both. I use the CLI for a lot of stuff but I also use the GitHub Desktop fork for Linux lol. I don’t care how powerful git is in CLI, that gui is just so nice imo
It took me forever to realize I could edit config files in a graphical text editor. When you have a really long file, it’s just nicer to have properly formatted text wrapping and a scrollbar with a preview box.
Exactly. Use the tools you have in the way that fits you best. If it aids your workflow, learn the CLI commands you use the most. If it’s something obscure or rarely used, use the GUI.
Another benefit of getting comfortable with the CLI that hasn’t been mentioned: you can then script stuff much more easily.
But you look way cooler when using the terminal for most of your stuff 💁♂️ Also using a riced-out window manager and a riced-out Vim config that you spent hundreds of hours customizing every aspect of :p Normal people don’t know what the fuck is going on on your PC, so you can instantly feel superior to those normies! Ah, also, btw, I use Arch ;)
To get annoyingly serious on a funny post, the one huge danger of GUIs that I’ve personally witnessed in many of my juniors is that they abstract away the need to understand the tool you’re using.
I regularly use a Git GUI, and I might have to google the rebase command for more complex tasks, but I know how Git works. I know what I can do with rebase, even if I don’t exactly know how to. If you only live in the GUI, you can get far never understanding the system. Until one day, when you fuck up a commit or a push, and you’re totally hosed because there isn’t a pretty button with the exact feature you want in your GUI.
Somehow I’ve made it 7 years without messing up a git command in a way I couldn’t fix in like 2 seconds. I primarily use VS Code’s source control; more featured clients like Sourcetree feel overly complex, and typing out git commands is fine, but you spend more time doing that than you would with VS Code’s approach. I’m really curious what you mean by fucking up a commit or push.
Try reverting a reverted commit (revert of revert, yes) while other team members are working on a branch which has the first revert. It’s super fun merging after that.
(Or something to that effect, I can’t remember the exact details of that fuckup.)
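Roughly the sequence, with made-up hashes:

```bash
git revert abc1234    # first revert: back out the original change on main
# ... later, once the problem is sorted out ...
git revert def5678    # revert the revert, so the original change comes back
# Anyone who branched while only the first revert existed now merges a history
# where the change vanishes and then reappears, which is where the fun starts.
```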
I don’t think I will, mostly cause I work on a team of 1 right now which makes my branches wonderfully simple.
Yeah, fuck that. It’s perfectly fine to build a GUI that makes things a bit easier, but make the GUI so that it mirrors the fucking workflow. I hate it when I want to automate something that’s super easy in the GUI and it takes AGES because there is no equivalent to what I’m doing in the GUI.
> I hate it when I want to automate something that’s super easy in the GUI and it takes AGES because there is no equivalent to what I’m doing in the GUI
glares angrily at Azure CLI
Azure CLI and AzPowershell are somehow so powerful and useful until they fall flat on their face.
So… my only requirement for my tools is that they have a well-supported CLI, and can be installed headless without graphical dependencies. Tools must be scriptable.
That said, it’s nice to have a UI. My ideal configuration is a scriptable tool with a good API, and a separate GUI tool that can drive it.
One of the best tools I’ve used is SuperSlicer. It’s a slicer for 3D printers. It has GUI, it has CLI and it has a DLL/SO so you can add its features to your own application. And it’s open source if linking against an existing library is too hard for you, lol.
“graphical user interfaces make easy tasks easy, while command line interfaces make difficult tasks possible”
- William E. Shotts Jr., The Linux Command Line: A Complete Introduction
It has taken me a long time to get comfortable using a Linux CLI (I’m definitely not as familiar with the Windows cmd prompt/PowerShell), and I know that if I log into a box anywhere, as long as it has `sh` or `bash` or some variant of those shells, I’ll be able to get by.
Now, on my home server, moving & renaming a bunch of media files has me really wishing I had a DE installed there to Ctrl+click / drag-n-drop…
Also, I love using VScodium/Code as an IDE bc of its configurability & rich plugin ecosystem – but recently I had some performance hiccups with extensions not playing nice together and started (again) down the masochistic path of configuring neovim to use as an “IDE”…
Why not mount your server as a share and use your desktop GUI to manipulate files? Then you can do both.
Laziness so far haha but yes that’s a good plan
I always feel that graphical interfaces make easy things difficult, in most cases. A bunch of fidgety clicking around, instead of a few keystrokes I could press with my eyes closed. They are more discoverable, though.
If you use emacs, dired and wdired together are fantastic for managing files like that. You can even run dired over tramp, so you can manage files on a remote server that doesn’t have emacs installed, using the emacs on your desktop. But there are also good cli options, you might want to look at the rename command, as one that’s probably installed by default on any given distro. That’s outside my expertise, though, as I just use emacs.
Yes I’ve used rename! In my case, I just need to rename and reorganize a bunch of movies & associated metadata files into directories. I don’t have too many stored digitally now, so I think just shaving the yak and doing it manually via file share will work for now.
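If I ever do script it, it’ll probably be something like this minimal sketch (paths and extensions invented):

```bash
#!/usr/bin/env bash
# move each movie plus its sidecar metadata into its own folder
for f in /srv/media/*.mkv; do
    title=$(basename "$f" .mkv)
    mkdir -p "/srv/media/$title"
    mv "$f" "/srv/media/$title/"
    # sidecar files (.nfo/.srt) share the movie's basename, if present
    mv "/srv/media/$title".nfo "/srv/media/$title".srt "/srv/media/$title/" 2>/dev/null || true
done
```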
Never been an emacs user… Seems like quite a rabbit hole
Skip the masochism, try helix. Switched to that + zellij with about 20 lines of config and never looked back
Takes a second to get used to the keybindings but after about ~2w you can painlessly switch back and forth between vim and helix pretty much instantly
Helix + zellij huh? I’ll definitely try it out
Feel free to ping me if I can help, at least in the form of starter configs/small hacks that emulate VS Code workflows or something :)
Personally I was the guy that had thousands of lines of Vim and Emacs configurations, so I really had to do this to manage the time sink (like you I had a stint with VS Code in between that eventually stopped working for me)
Yeah, keep telling yourself that buddy.
So far I don’t think anyone has interpreted the meme correctly: the wikiHow guy is supposed to represent an obvious shortcoming, voiced by a guy trying to convince himself it’s not a problem.
Using the right tool for the right task is a big part of being a good engineer.
thank you.
I think I really only use GUIs if I am learning something new and trying to understand the process/concepts or if I’m doing something I know is too small to automate. Generally once I understand a problem/tool at a deeper level, GUIs start to feel restrictive.
Notable exceptions are mostly focused around observability (Grafana, new relic, DataDog, etc) or just in github. I’ve used gh-dash before but the web ui is just more practical for day to day use.
For context, I’m in SRE. I feel like +90% of my day is spent in kubernetes, terraform, or ci/cd pipelines. My coworkers tend to use Lens but I’m almost exclusively in kubectl or the occasional k9s.
Searching a log file? I want `less`. Searching all log files? I want log aggregation lol.
Exactly.
One log file, or all, I want grep or awk, maybe with find in front, possibly throw some jq on top if something is logging big json blobs.
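Something like this, assuming JSON-lines logs with timestamp/service/message fields under a made-up /var/log/myapp:

```bash
# errors from the last day, across all files, flattened to timestamp/service/message
find /var/log/myapp -name '*.log' -mtime -1 -print0 \
  | xargs -0 grep -h '"level":"error"' \
  | jq -r '[.timestamp, .service, .message] | @tsv' \
  | sort
```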
That’s a lot slower at scale than something like Loki.
I feel you. The problem with a lot of Elastic style document search engines is that they don’t ever let you search by very explicit terms because of how the index is built. I believe the pros outweigh the cons but I often wish I could “drop into” grep, less, and others from within the log aggregation tool.
If I knew what I was looking for I could grep all the log files and pipe the output to another file to aggregate them.
The problem is that they’re all on different servers. Once you use log aggregation stuff like DataDog, Splunk, or Kibana you get it, but before it’s hard to see the benefits. Stuff like being able to see a timestamp of when an error first appeared and then from the same place see what other stuff happened around the same time.
If I had dozens or hundreds of servers that would make a huge difference, but for under a dozen I think the cost of setting that all up isn’t worth the added benefit. Plus if the log aggregation goes down (which I’ve seen happen with some really hairy issues) you’re back to grepping files so it’s good to know how.
Totally. I’m talking more from the enterprise perspective. Even apart from that I’m not sure if the cost is worth it at that scale. Even using foss solutions the dev hours setting it up might not be worth it.
Github’s UI is total garbage compared to basic git commands, though.
You can’t manage pull requests, github actions, repo collaborators, permissions, or any number of the dozens of other things github does just from basic git commands.
Both interfaces are important and useful. I spend a lot of time in both and would hate being forced to use either one for everything.
PSA: Since his finger and its reflection touch, he’s likely looking into a one-way mirror. There’s someone behind the glass.
I just walked around my house touching all my mirrors and they all do this. Hope they’re not on to me now… Think I’ll wait for night and try to make a break for it.
Or it’s an extremely thin glass.
Or just a reflective surface.
Or his name is Truman Burbank
Someone told me that the Windows Server GUI has more options than the CLI. That scared me off Windows Server (how do you repeatedly set up the same server, with screenshot documentation???)
It’s been a while since I’ve found that to be true. You can do everything you want to do in PowerShell nowadays.
Yeah, I think MS started adding PowerShell for everything after server 2012 R2.
First of all, most Windows settings are in the registry, so you don’t have to go to the UI, you can just upload new settings straight into the registry through CLI.
Second, PowerShell exists and it’s awesome!
And third, you can always use UI automation tools if you’re bad at registry and PowerShell. Just record your session and run whenever needed.
Newer versions of Windows can give you the exact Powershell code it’s executing based on what you’ve configured in the gui. This is still extremely inconsistent across Windows services though. I don’t know that I’d feel comfortable running a headless windows server yet. Too much stuff still assumes you’ll use the gui for most things.
To be honest, if you really need Windows servers, you should run Core if possible. Basically all of Microsoft’s management shit can be run remotely from your jump/management host. That said, a lot of shit requires a GUI and refuses to run on Core, like AD sync.
Is there a significant performance difference? I’m assuming the attack surface is lower.
There’s a slight difference in resource usage of course, which does scale if you’re unlucky enough to have a lot of them.
Minimum RAM required is 512 MB for Core and 2 GB for Desktop Experience, so we can safely assume keeping the GUI usable eats some 1.5 GB of memory. 500 servers adds some 750 GB of overhead, in theory.
Then there’s of course the fact that less bloat generally adds up to fewer problems. Ever RDP into a server and the Start menu refuses to open, or some other weird GUI shit happens? That’s just wasting your time.
it makes you a Windows engineer which is worse
"Windows engineer’ lmao.
Hard disagree. I use arch btw
👍👍👍 arch btw 🤤🤤🤤 I use arch btw 🥺🥺🥺 you 🫵🫵🫵🫵🫵🫵🫵 should use arch too btw 👄❤️ I used to be a filthy 🤮 windows 🤮 user 🤮 but now I use arch!!! 🤤🤤 don’t be afraid of the install process, you’re just a dumbass normie 🤓🤓🤓🤓
This but unironically
No, thank you, I’ll be staying on Gentoo tyvm
Is this your first attempt at emoji pasta?