Pretty dumb, honestly. If anything it just adds a Streisand effect to it as people try to figure out what’s censored.
Not that censoring it has any value whatsoever. Like if a child sees that, so fucking what?
Yeah it’s not all that uncommon in school, just increasingly uncommon in industry.
Visual… programming languages? Yikes.
Common Lisp isn’t a functional programming language. Guile being based on Scheme is closer, but I’d still argue that opting into OOP is diverging from the essence of FP.
If the MR is anything bigger than a completely trivial change in a file or 2, it most likely should be broken into multiple commits.
A feature is not atomic. It has many parts that comprise the whole.
Commits should be reasonably small, logical, and atomic. MRs represent a larger body of work than a commit in many cases. My average number of (intentionally crafted) commits is like 3-5 in an MR. I do not want these commits squashed. If they should be squashed, I would have done so before making the MR.
People should actually just give a damn and craft a quality history for their MRs. It makes reviewing way easier, makes stuff like `git blame` and `git bisect` way more useful, makes it possible to actually make targeted revert commits if necessary, makes cherry-picking a lot more useful, and so much more.
Merge squashing everything is just a shitty band-aid on poor commit hygiene. You just get a history of huge, inscrutable commits and actively make it harder for people to understand the history of the repo.
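To make the targeted-revert point concrete, here's a rough demo in a throwaway repo (file and commit names are made up): with small, atomic commits, you can back out exactly one logical change without touching anything else.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo readme > README.md
git add README.md && git commit -qm "Initial commit"

echo "feature A" > a.txt
git add a.txt && git commit -qm "Add feature A"

echo "feature B" > b.txt
git add b.txt && git commit -qm "Add feature B"

# Feature A turns out to be broken: revert just that one commit.
# With a squashed mega-commit, this surgical undo wouldn't be possible.
git revert --no-edit HEAD~1
```

After the revert, `a.txt` is gone but feature B is untouched, and the history records exactly what was undone and why.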
Just remember that if you aren’t actually concatenating files, `cat` is always unnecessary.
If you mean for programming specifically, I… don’t, really. At most it would be for a quick sanity check on syntax in a language I don’t write often, for which Google is fine. But otherwise I rely on documentation and search features of the various language/tool-specific websites.
https://porkmail.org/era/unix/award#cat
`jq < file.json`

`cat` is for concatenating multiple files, not redirecting single files.
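To spell out the useless-use-of-cat point, a redirection does the same job without spawning an extra process (using `wc` here since it's universally available; the file path is just an example):

```shell
# Create a sample file
printf 'one\ntwo\nthree\n' > /tmp/sample.txt

# Useless use of cat: an extra process and a pipe, same result
cat /tmp/sample.txt | wc -l

# Redirection: wc reads the file directly
wc -l < /tmp/sample.txt

# cat's actual job is conCATenating multiple files:
cat /tmp/sample.txt /tmp/sample.txt | wc -l
```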
Meanwhile, I can open a 1GB file in (stock) vim without any trouble at all.
Formatting is what `xmllint` is for.
`:syntax off` and it works just fine.
I understand what you’re saying—I’m saying that data validation is precisely the purpose of parsers (or deserialization) in statically-typed languages. Type-checking is data validation, and parsing is the process of turning untyped, unvalidated data into typed, validated data. And, what’s more, is that you can often get this functionality for free without having to write any code other than your type (if the validation is simple enough, anyway). Pydantic exists to solve a problem of Python’s own making and to reproduce what’s standard in statically-typed languages.
In the case of config files, it’s even possible to do this at compile time, depending on the language. Or in other words, you can statically guarantee that a config file exists at a particular location and deserialize it/validate it into a native data structure all without ever running your actual program. At my day job, all of our app’s configuration lives in Dhall files which get imported and validated into our codebase as a compile-time step, meaning that misconfiguration is a compiler error.
You’re just describing parsing in statically-typed languages, to be honest. Adding all of this stuff to Python is just (poorly) reinventing the wheel.
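A rough sketch of the "parsing is validation" idea in Python terms (the `Config` type and fields here are hypothetical, purely for illustration). In a statically-typed language the compiler then guarantees that everything downstream of the parse boundary only ever sees valid data:

```python
import json
from dataclasses import dataclass

# Hypothetical config type for illustration.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

def parse_config(raw: str) -> Config:
    """Turn untyped JSON text into typed, validated data, or fail loudly."""
    data = json.loads(raw)
    host = data["host"]
    port = data["port"]
    if not isinstance(host, str) or not isinstance(port, int):
        raise TypeError("invalid config")
    return Config(host=host, port=port)

cfg = parse_config('{"host": "localhost", "port": 8080}')
# Downstream code only ever handles a Config, never a raw dict.
print(cfg.host, cfg.port)
```

In a language like Haskell or Rust this parse function is typically derived for free from the type definition, which is exactly the functionality Pydantic retrofits onto Python.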
Python’s a great language for writing small scripts (one of my favorite for the task, in fact), but it’s not really suitable for serious, large scale production usage.
It sounds funny but it’s not an uncommon phrase.
No, you divide work so that the majority of it can be done in isolation and in parallel. Testing components together, if necessary, is done on integration branches as needed (which you don’t rebase, of course). Branches and MRs should be small and short-lived with merges into master happening frequently. Collaboration largely occurs through developers frequently branching off a shared main branch that gets continuously updated.
Trunk-based development is the industry-standard practice at this point, and for good reason. It’s friendlier for CI/CD and devops, allows changes to be tested in isolation before merging, and so on.
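The short-lived-branch flow described above can be sketched roughly like this (throwaway repo, made-up branch and file names):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo base > app.txt
git add app.txt && git commit -qm "Initial commit"

# Short-lived feature branch off the shared main branch
git switch -q -c small-feature
echo feature >> app.txt
git add app.txt && git commit -qm "Add small feature"

# Merge back promptly; the branch lives hours or days, not weeks
git switch -q main
git merge --no-ff -m "Merge small-feature" small-feature
git branch -d small-feature
```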
Sure… That’s what libraries are for. No one hand-rolls that stuff. You can do all of that just fine (and, actually, in a lot less code, mostly because Java is so fucking verbose) without using the nightmare that is Spring.
I’d say it’s definitely worth it. I don’t actually use nixos itself, but I do use nix a lot. I have everything I need for work in a home manager configuration, so I can literally just install nix and load up my config and have all programs and configuration of said programs installed and ready to go (on any UNIX system). I started doing this since changing jobs means a new machine, and I got really tired of all of the inconsistencies between machines when bringing over my dotfiles, and having to install a bunch of packages I use every time I changed jobs.
I do want to make the switch from Arch to nixos on my personal machine eventually too, but I hardly spend any time on computers outside of work these days, unfortunately. But the great thing is that my home manager configuration can pretty easily slide right into a nixos configuration, which is what many people do.
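For anyone curious what that looks like, a home-manager config at its most minimal is something like this (the packages and settings here are just illustrative examples):

```nix
# home.nix (minimal sketch; packages/settings are illustrative)
{ pkgs, ... }: {
  home.username = "me";
  home.homeDirectory = "/home/me";
  home.stateVersion = "24.05";

  # Programs installed and configured declaratively,
  # portable to any machine with nix installed
  home.packages = [ pkgs.ripgrep pkgs.jq ];
  programs.git = {
    enable = true;
    userName = "me";
  };
}
```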
Technically “to eat” is the infinitive form of the verb, and using infinitives as nouns isn’t all that unusual in many languages.
The tooling has improved dramatically since then. There’s now a full-fledged language server (https://haskell-language-server.readthedocs.io/en/stable/), `ghcup` (https://www.haskell.org/ghcup/) is now a thing for installing/managing different versions of GHC/cabal/HLS, there are now formatters (https://github.com/tweag/ormolu), and cabal has modernized significantly and supports multi-package projects much more comfortably now. Nix-based Haskell infrastructure is also now pretty nice. There’s even stuff like https://github.com/srid/haskell-template/blob/master/flake.nix to very quickly get spun up on a new project using Haskell and nix, including vscode, formatter, HLS, and a full development shell with a bunch of useful commands.
Another great modern thing (which powers HLS) is that GHC can now emit `.hie` files for each module it compiles, which are basically a standardized representation of the AST for that module that can be consumed/manipulated programmatically. Lots of tools can use this. One such tool that’s particularly useful is https://github.com/wz1000/HieDb, which constructs a SQLite database from the information in these files, so you can basically have an index of every symbol definition, reference, export, etc. all readily available to use however you want.
I’m confused what you mean. OpenAPI has nothing to do with JS.