“I can’t see a thing, I’ll open this one”.
I can’t shake this feeling that these are lacking something. I remember looking at Fira for the first time and just thinking wow, and even JetBrains Mono had a sort of generic charm. These, on the other hand, are just meh.
Maybe they are someone’s cup of tea though. I am sure in 6 months I will be hearing about how GitHub invented the developer font, or some rubbish like that.
Sure, so say you have a requirement to add two numbers.
You write a test that has the two inputs and the expected output. Run the test and watch it fail, checking that the output is what you expect, so you know the test is working.
At this point you have no implementation, and you use this opportunity to confirm that the test will work, by checking it is failing how you expect. If you are pairing, something I teach is that you should call out what you expect, kind of like calling your shot in American pool. Sometimes the test passes; in this case it is your opportunity to break the test and confirm it will fail (though this is often a sign you did too much work previously, and you might need to check whether you really are making the smallest possible change).
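For example, here is a rough sketch of that first test in Python with pytest (the add function and file names are just illustrative):

```python
# test_add.py -- written before any implementation exists
from add import add  # nothing implements add yet, so even collecting the test fails


def test_adds_two_numbers():
    # Call your shot before running: "I expect this to fail because add does not exist yet"
    assert add(2, 3) == 5
```

Running pytest now should fail exactly the way you predicted, which is the whole point of the red step.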
Do the minimum to correct the problem described by the failing test (you can follow the Transformation Priority Premise here, if you are familiar with it).
At this point you have only implemented the simplest possible code, which makes it really easy to spot if there is a problem caused by some flaw in the test, and you have confirmed that it matches your test.
What’s more, you can confirm that all, and only, the behaviours described in the test are implemented.
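The minimum really can be almost embarrassingly small; a sketch of what that first green step might look like:

```python
# add.py -- the simplest thing that satisfies the single example so far
def add(a, b):
    # Deliberately hard-coded: the one test only demands 5,
    # and it is the next example that will force this to generalise.
    return 5
```

It looks silly, but with this little code a flaw in either the test or the implementation has nowhere to hide.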
Look at the code and decide if you can simplify it; do any refactoring.
Got to clean the kitchen, because if we don’t clean the kitchen we will have to clean the garage, and we don’t want that because it’s a bigger job.
repeat
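Going round the loop again with a second example is what drives out the real implementation; roughly:

```python
# test_add.py -- a second example breaks the hard-coded return value...
def test_adds_two_other_numbers():
    assert add(10, 7) == 17


# add.py -- ...and the smallest change that satisfies both examples is the general case
def add(a, b):
    return a + b
```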
Why this works is that developing the code in a TDD style forces you to move in smaller steps, meaning bugs are shallower when they do occur. You aren’t dealing with 20 complex lines, you are dealing with returning a constant, or a selection, or whatever. The scope for the test being wrong is reduced and the amount of implementation is reduced; generally the tests end up more concise and smaller too, and the interfaces are friendlier, because you didn’t think “how do I calculate this?”, you thought “what would be a nice way to call this?”.
What’s more, it encourages an example-driven approach that has developers thinking again and again about the most sensible input data and what the output should be, reducing the chance that any one wrongly implemented test wouldn’t be picked up by the other examples.
TL;DR: the word “driven” is the key; a test that is illogical will never drive you to the working code.
Congrats, you have discovered why in TDD you write the test, watch the test fail, then make the test pass, then refactor. AKA: Red, Green, Refactor.
I have done this too. Shit happens.
One of my co-workers used to write UPDATE statements backwards, LIMIT then WHERE and so on, to prevent this stuff. Feels like a bit of a faff to me.
I must admit that I do like the built-in page translation, which I guess was made by a similar team using ML and all. Maybe I will like this too? Feels a bit… niche. Maybe it’s a stepping stone to something around misinformation at some point?
Edit: this actually might not be coming as a browser feature at all. Mozilla is trying to grow its Mozilla.ai team, so perhaps it’s really looking for people with AI knowledge, web-tech experience, and a track record of using them for an ethical purpose. That team would be well placed to build pretty much any AI-based tool for the Firefox ecosystem.
Deployments and deployment frequency are pretty squarely a developer’s responsibility…
This is fantastic work on an immediate problem. Thank you.
There is something amazing about someone just sharing a solution like this without expecting anything back, and even if this isn’t the best or the right solution, it contributes to the global commons and improves society.
It always did.