• 0 Posts
  • 114 Comments
Joined 1 year ago
Cake day: June 19th, 2023


  • Here’s a tip on good documentation: try to write the documentation first. Use it as your planning process, to spec out exactly what you’re going to build. Show the docs to people (on GitHub or on a mailing list or on lemmy or whatever), get feedback, and change the documentation to clarify any misunderstandings and/or add any good ideas people suggest.

    Only after the docs are in a good state, then start writing the code.

    And any time you (or someone else) finds the documentation doesn’t match the code you wrote… that should usually be treated as a bug in the code. Don’t change the documentation, change the code to make them line up.


  • Sure we can make a different ticket for that to move this along, but we’re getting product to agree first.

    Ooof, I’m glad I never worked there.

    QA’s job should be to make sure the bugs are known/documented/prioritised. It should never be a roadblock that interrupts work while several departments argue over what to do with a ticket.

    Seriously, who cares if the current ticket is left open with a “still need to do XYZ” or it gets closed and a new one is opened: “need to do XYZ”. Both are perfectly valid; do whichever one you think is appropriate and don’t waste anyone’s time discussing it.


  • If any of that is part of the hiring process - I don’t want the job.

    If HR is incompetent enough to consider things like relationship status or political opinions then what other bullshit policies does the company have? It’s probably the tip of the iceberg.

    By far the most important thing is to have good colleagues, because without good colleagues your job will be miserable or the company will not last (or both). I made the mistake of taking a shitty job at high pay once and it was one of the worst decisions of my life.

    Don’t waste your life working for incompetent companies.

    Also, as someone who has hired devs… if you have a public profile, and it doesn’t make you look hopelessly incompetent, then your application is going onto my shortlist. Too many applications cross my desk to look at all of them properly, so a lot of good candidates won’t even get considered. But if there’s a GitHub or similar profile, I’m going to open it, and if I see green squares… you’ve got my attention.

    You’ll get my attention whether the username matches your real name or not, but bonus points if it’s your real name. Openness leads to trust. And trust is critical.


  • Yeah I’ve given up on integration tests.

    We just do “smoke testing” — essentially a documented list of steps that a human follows after every major deployment. And we have various monitoring tools that do a reasonably good job of detecting and reporting problems (for example, how much money to charge a customer is calculated twice by separate systems, and if they disagree an alert is triggered and a human investigates; if sales are lower than expected, that gets investigated too).

    Having said that, you can drastically reduce the bug surface area and reduce how often you need to do smoke tests by moving as much as possible out of the user interface layer into a functional layer that closely matches the user interface. For example if a credit card is going to be charged, the user interface is just “invoice number, amount, card detail fields, submit, cancel”. And all the submit button does is read every field (including invoice number/amount) and send it to an API endpoint.

    From there all of the possible code paths are covered by unit tests. And unit tests work really well if your code follows industry best practices (avoid side effects, have a good dependency injection system, etc).
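    The split described above can be sketched in JavaScript — every name here is hypothetical, just to illustrate the shape of a thin UI over a testable functional layer:

```javascript
// Pure "functional layer": everything the submit button does, minus the DOM.
// No side effects, so every code path is trivial to cover with unit tests.
function buildChargePayload(fields) {
  if (!fields.invoiceNumber) throw new Error("invoice number is required");
  const amount = Number(fields.amount);
  if (!Number.isFinite(amount) || amount <= 0) throw new Error("invalid amount");
  return {
    invoiceNumber: fields.invoiceNumber,
    amountCents: Math.round(amount * 100),
    card: {
      number: fields.cardNumber,
      expiry: fields.cardExpiry,
      cvc: fields.cardCvc,
    },
  };
}

// The UI layer shrinks to: read every field, hand it to the functional
// layer, send the result to an API endpoint. It rarely changes, so it
// rarely needs a smoke test.
async function onSubmit(formFields, post = fetch) {
  const payload = buildChargePayload(formFields);
  return post("/api/charge", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```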

    I generally don’t bother with smoke testing if nothing that remotely affects the UX has changed… and I keep the UX as a separate project so I can be confident the UX hasn’t changed. That code might go a year without a single commit, even on a project with a full-time team of developers. Users also appreciate it when you don’t force them to relearn how their app works every few months.



  • The long-term popularity of any given tool for software development is proportional to how much labour arbitrage it enables.

    Right. Because if you quote $700,000 to do a job in C/C++, and someone else quotes $70,000 to do the same job in JavaScript… no prizes for correctly guessing who wins the contract.

    But that’s not the whole story. Where C really falls to shit is if you compare it to giving the JavaScript project $500,000. At that point, it’s still far cheaper than C, but you can hire a 7x larger team. Hire twice as many coders and also give them a whole bunch of support staff (planning, quality assurance, user experience design, a healthy marketing budget…)

    JavaScript is absolutely a worse language than C/C++. But if you compare Visual Studio to Visual Studio Code (with a bunch of good plugins)… then there’s no comparison: VSCode is a far better IDE. And Visual Studio has been under active development since the mid ’90s. VSCode has existed less than half that long and has already eclipsed it, despite being backed by the same company, and despite that company being pretty heavily incentivised to prioritise the older product (which they sell for a handsome profit margin, while the upstart is given away for free).

    I learned C 23 years ago and learned JavaScript 18 years ago. In my entire life, I’ve written maybe 20,000 lines of C code where I was actually paid to write that code and I couldn’t possibly estimate the number of lines of JavaScript. It’d be millions.

    I hate JavaScript. But it puts food on the table, so I turn to it regularly.

    Large Language Models are a remarkable discovery that should, in the long term, tell us something interesting about the nature of text. They have some potentially productive uses. Their destructive uses and the harm they represent, however, outweigh that usefulness by such a margin that, yes, I absolutely do think less of you for using them. (You can argue about productivity and “progress” all you like, but none of that will raise you back into my good opinion.)

    Yeah you’re way off the mark. Earlier today I added this comment to my code:

    // remove categories that have no sales

    For context… above that comment were fifty lines of relatively complex code to extract a month of sales data from several database tables and summarise it down to a simple set of figures, which can be used to generate a PDF report for archival/potential future auditing purposes. Boring business stuff that I’d rather not work on, but it has to be done.

    The database has a bunch of categories which aren’t in use currently (e.g. seasonal products) and I’d been asked to remove them. I copy/pasted that comment from my issue tracker into the relevant function, hit enter, and got six lines of code. A simple map reduce function that I could’ve easily written in two minutes. The AI wrote it in a quarter of a second, and I spent one minute checking if it worked properly.
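    As a rough sketch, the generated function was along these lines (the field names here are assumptions, not the actual schema):

```javascript
// Drop categories that have no sales, given summarised sales rows.
// Reduce the rows into per-category totals, then keep only the
// categories that sold something.
function removeEmptyCategories(categories, salesRows) {
  const totals = salesRows.reduce((acc, row) => {
    acc[row.categoryId] = (acc[row.categoryId] || 0) + row.amount;
    return acc;
  }, {});
  return categories.filter((category) => (totals[category.id] || 0) > 0);
}
```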

    That’s not a “potential” productivity boost, it’s a big one. Does that make me worse at my job? No - the opposite. I’m able to focus all of my attention on the advanced features of my project that separate it from the competition, without getting distracted much by all the boring shit that also has to be done.

    I’ve seen zero evidence of LLM-authored code being destructive. Sure, it writes buggy code sometimes… but humans do that too. And anyone with experience in the industry knows it’s easier to test code you didn’t write… well, guess what, these days I don’t write a lot of my code. So I’m better equipped to catch the bugs in it.


  • abhibeckert@lemmy.world to Programming@programming.dev...

    PowerShell is heads and shoulders over bash

    Sure… but that’s a low bar. Bash is basically the worst shell of them all (if you exclude the ones that are so bad nobody uses them).

    I’m a fan of fish personally. It’s both concise and feature-rich. The fish language isn’t something I’d want to write a complex shell script in, but it’s a thousand times better than bash, and if you’re writing something complex then I’d argue any scripting language is the wrong tool for the job. Including PowerShell. You should be using a proper programming language (such as C#).

    PowerShell is innovative, for sure. But string-based input/output in shell scripting wasn’t broken (unless you use bash), and I’m convinced trying to fix it by replacing simple string input/output with objects was a mistake.

    I love OOP. However I also love using existing tools that work well, and none of those tools are designed to be OOP.


  • abhibeckert@lemmy.world to Programming@programming.dev...

    Swift is a relatively young language (ten years), and ~3 years ago a lot of things were brand new or had relatively recently been refactored. Also, Apple released it to the public before it was really finished, then did the final work of finishing off the language out in the open, with collaboration, as any well-run open source project should be developed.

    For the first five years or so it was barely ready for production use in my opinion, and it sounds like you tried to use it when it was ready but still had some rough edges, such as documentation.

    For example, async/await was added around the time you looked into it, and that obviously made a lot of documentation relating to threads out of date (from memory, they pretty much deleted all of it and started the long process of rewriting it, beginning with very basic docs to get broad coverage before adding detailed coverage).

    And before that, the Unicode integration wasn’t quite right yet — they did the tough work to make this work (most of these characters are multiple bytes long):

    let greeting = "Hello, 🌍! こんにちは! 🇮🇳"
    
    for character in greeting {
        print(character)
    }
    

    And this evaluates to true even though the raw data of the two strings are totally different:

    let eAcute: Character = "\u{E9}" // é
    let combinedEAcute: Character = "\u{65}\u{301}" // e followed by  ́
    
    if eAcute == combinedEAcute {
        print("Both characters are considered equal.")
    }
    

    I hated how strings were handled in the original Swift, but it’s great now. It’s really hard to write documentation when fundamentals like that are changing. These days, though, Swift is very stable, and all the recent changes have been new features that don’t affect existing code (or already-written documentation). For example, in 2023 they added macros - essentially an inline function (your function won’t be compiled as an actual function; the macro code will be embedded in place where it’s “called”).

    You define a macro roughly like this (simplified - real macro declarations are a little more involved):

    macro func assertNotNil(_ value: Any?, message: String) {
        if value == nil {
            print("Assertion failed: \(message)")
        }
    }
    

    And then if you write this code:

    assertNotNil(optionalValue, message: "optionalValue should not be nil")
    

    … it will compile as this:

    if optionalValue == nil {
        print("Assertion failed: optionalValue should not be nil")
    }
    

    Notice it doesn’t even do any string interpolation at runtime. All of that happens at compile time! Cool stuff, although it’s definitely a tool you can shoot yourself in the foot with.


  • abhibeckert@lemmy.world to Programming@programming.dev...

    is [Swift] using fibers instead of real OS threads(?)

    There are similarities to fibres, I’d say they’re solving the same problem, but they’re not really the same thing.

    It uses libdispatch which is built on top of “real OS threads” though on some kernels (especially BSD ones) it takes advantage of non-POSIX standard OS threading features (it can run without them, but runs better with them).

    Essentially you give the dispatcher a set of tasks, and the dispatcher will execute those tasks on as many threads as makes sense for the available hardware (this is where kernel integration helps, because the number of available CPU cores depends on the current workload for all processes, not just your process, and it changes from one millisecond to the next). The general goal is to keep every CPU core fully loaded, unless QoS has been set low which would prioritise battery life (and might, for example, use “efficiency cores” or avoid “turbo boost”).

    What are these things/what is this comparing?

    It’s a simple variable assignment, such as person.name = "bob", where the compiler has recognised the value might be accessed concurrently from multiple threads and may need to pause execution for a task that is waiting for the lock to be released. But of course, since the goal is to keep the CPU core active, it doesn’t just pause the code running on that core - it’s likely to start executing something else on the same core while waiting.

    OperationQueue and DispatchQueue are basically the same, but OperationQueue has additional scheduling / dependency features which come with a slight performance overhead. If your execution tasks are slow and have minimal interaction with other threads, such as resizing half a million images, then you wouldn’t notice the performance overhead. But if you have very short operations that interact with each other (or if you just don’t need any of those management features) then DispatchQueue is the way to go.

    They’re largely the same in basic use:

    dispatchQueue.async {
        // your code here
    }

    operationQueue.addOperation {
        // your code here
    }
    

  • abhibeckert@lemmy.world to Programming@programming.dev...

    What? That’s not what happened at all.

    JavaScript was built entirely by Netscape from the ground up with no external involvement, and that was totally their “initial plan”. It shipped in testing (under the name “LiveScript”) within months of Netscape 1.0.

    And then Sun Microsystems paid Netscape a lot of money to rename it to JavaScript and pretend it was related to Java, even though it had nothing at all to do with Java.


  • abhibeckert@lemmy.world to Programming@programming.dev...

    Swift. It’s an amazing language with clear but concise syntax that has every language feature you could ever want, including some really nice ones that are quite rare (not unique, but certainly rare, especially all in one place).

    The most unique and meaningful features are the memory manager and thread manager.

    Write a tight loop in Swift vs almost any other language and there’s a good chance Swift will use orders of magnitude less memory - because it’s better at figuring out when you’re done with a variable and don’t need that five-megapixel image anymore. And it’s fast too. Memory management is something we rarely need to worry about in most languages, but in Swift it’s more like “almost never” instead of just “rarely”.

    The threading system is so efficient that calling something on another thread is almost as fast as an ordinary function call, and the code to do it is almost as clean as a function call, with all the tools you need for efficient (and reliable) data transfer between threads. The few mistakes you can make are usually caught by the compiler. Swift programmers use threads all the time, even when they don’t really need to. It’s nothing like other languages, where threads introduce unnecessary complexity/bugs or performance bottlenecks.

    Seriously, look at a comparison of DispatchQueue and OperationQueue on a thread-locked operation (setting a shared variable): a million operations in a bit over zero seconds vs nearly 30 seconds. And the kicker is the 30-second one isn’t “slow” - a lot of thread-safety management tools would take minutes if you tried that (both comparisons are done in Swift, which has several different ways to deal with thread safety).

    Swift is fully open source and cross platform. But unfortunately it hasn’t taken off outside of Apple platforms, and therefore it lacks comprehensive libraries for a lot of commonplace tasks particularly when it comes to server side development or interacting with the Window Manager on Windows/Linux/Android.



  • For anyone who wants to go down that path — I recommend reading (or listening to) “Build” by Tony Fadell:

    https://www.buildc.com/the-book

    Fadell worked in the R&D departments of various companies you may have heard of, such as Apple, Google, Sony, Philips, Motorola, Toshiba… as well as others you likely haven’t heard of (because they were moonshots that exploded mid-flight - one tried to invent an iPhone in 1990, when touch screens were shit, networking was wired, and batteries were disposable. Talk about ahead of its time). That failure wasn’t wasted - a lot of the mistakes they made were avoided years later when he was leading one of the teams that invented the iPhone (not from scratch, but by learning from past work).

    He’s now basically retired and spends his time advising the next generation, but that book covers all the basics. It really is a gift to the billions of people who can’t ask an experienced product developer for advice. And if you are lucky enough to be able to ask, the book will give you enough knowledge to avoid asking stupid questions.


    1. Write down who your customers are.
    2. Write down what problem your customers have that can be solved with your product.
    3. Write down how your product solves that problem.
    4. Figure out how you can achieve that goal (this needs to be separate from step 3 - you’re essentially tackling the same thing from a different perspective… helping you see things that might not be visible from the other one).
    5. Anything that does not bring your product closer to the goal(s): remove it from your product.
    6. Anything that would bring you closer but isn’t achievable (not enough resources, going to take too long, etc): remove those as well.

    Those are not singular items. You have multiple customers (hopefully thousands). Your customers have multiple problems your product can solve. Actually solving all of those problems will require multiple things.

    If that list sounds functional, that’s because good design is functional. Aesthetics matter. Why do you choose a black shirt over an orange shirt with rainbow unicorns? Take the same approach for the colours on your app icon. Why do you choose jeans that are a specific length? Take that same approach for deciding how many pixels should separate two buttons on the screen.


    You said you struggle with looking at a blank page. Everyone does. If you follow the steps I outlined above, then you won’t be working with a blank page.



  • Sure - for example we migrated all our stuff from MySQL to MariaDB.

    It was completely painless, because all of the source code and many of the people who wrote that code migrated to MariaDB at the same time. They made sure the transition was effortless. We spent months second-guessing ourselves, weighing all of our options, checking and triple-checking our backups, and verifying everything worked smoothly afterwards… but the actual transition itself was a very short shell script that ran in a few seconds.

    I will never use a proprietary database unless it’s one I wrote myself and I’d be extremely reluctant to do that. You’d need a damned good reason to convince me not to pick a good open source option.

    My one exception to that rule is Backblaze B2. I do use their proprietary backup system, because it’s so cheap. But it’s only a backup and it’s not my only backup, so I could easily switch.

    I’m currently mid-transition from MariaDB to SQLite. That one is more complex, but not because we did anything MariaDB-specific. It’s more that SQLite is so different that we have a completely different database design (for one thing, we have hundreds of databases instead of just one… some of those databases are less than 100KB - the server just reads the whole thing into RAM, and queries that were slow on our old monolithic database take less than 1 millisecond with this new system).

    never use anything vendor specific like stored procedures, vendor specific datatypes or meta queries

    Yeah, we don’t do anything like that. All the data in our database is in a JSON type (string, number, boolean, null), with the exception of binary data (primarily images). It doesn’t even distinguish between float/int - though our code obviously does. All of the queries we run are a simple “get this row by primary key” or “find all rows matching these simple where clauses”. I don’t even use joins.

    Stored procedures etc. are done in the application layer. For example, we don’t do an insert query anywhere. We have a “storage” object with simple read/write functions, and on top of that there’s an object for each model. That model does all kinds of things, such as writing the same data in different places (with different indexes) and catching “row not found” failures with an “ok, let’s check if it’s in this other place”. That’s also the layer where we do constraints, which includes complex business rules, such as “even if this data is invalid — we will record it anyway, and flag it for a human to follow up on”.
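    A minimal sketch of that layering (every class, table, and field name here is hypothetical, just to show the shape):

```javascript
// The "storage" object only knows how to read and write rows.
class Storage {
  constructor() { this.tables = new Map(); }
  read(table, key) {
    const rows = this.tables.get(table);
    return rows ? rows.get(key) : undefined;
  }
  write(table, key, row) {
    if (!this.tables.has(table)) this.tables.set(table, new Map());
    this.tables.get(table).set(key, row);
  }
}

// The model layers business rules on top of the storage object.
class CustomerModel {
  constructor(storage) { this.storage = storage; }
  save(customer) {
    // "Even if this data is invalid - we will record it anyway,
    // and flag it for a human to follow up on."
    const row = { ...customer, needsReview: !customer.email };
    this.storage.write("customers", customer.id, row);
    // Write the same data in a second place, with a different index.
    this.storage.write("customersByEmail", customer.email || "(none)", row);
  }
  find(id) {
    const row = this.storage.read("customers", id);
    if (row !== undefined) return row;
    // Catch "row not found" with an "ok, let's check this other place".
    return this.storage.read("archivedCustomers", id);
  }
}
```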



  • Is there some huge benefit that I’m missing?

    For example, I recently fixed a bug where a function would return an integer 99.9999% of the time, but the other 0.0001% of the time it returned a float. The actual value came from an HTTP request, so it started out as a string, and the code was relying on dynamic typing to convert that string to a type that could be operated on with math.

    In testing, the code only ever encountered integer values. About two years later, I discovered customer credit cards were charged the wrong amount of money if it was a float value. There was no exception, there was nothing visible in the user interface, it just charged the card the wrong amount.

    Thankfully I’m experienced enough to have seen errors like this before - and I had code in place comparing the actual amount charged to the amount on the customer invoice… and that code did throw an exception. But still, it took two years for the first exception to be thrown, and then about a week for me to prioritise the issue, track down the line of code that was broken, and deploy a fix.

    In a statically typed language, my IDE would have flagged that line of code in red as I was typing it, I would’ve been like “oh… right”, and fixed it in two seconds.
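    The failure mode is easy to reproduce in JavaScript (the values here are made up, not the actual production code):

```javascript
// The buggy path: an amount arrives from an HTTP request as a string,
// and dynamic coercion quietly turns it into float arithmetic.
function toCents(amountFromRequest) {
  // "19.99" * 100 coerces the string to a float first, giving
  // 1998.9999999999998 rather than the integer 1999.
  return amountFromRequest * 100;
}

// The fix: convert explicitly and round, so the rest of the billing
// code gets the integer it silently assumed it had all along.
function toCentsFixed(amountFromRequest) {
  return Math.round(Number(amountFromRequest) * 100);
}
```

    Integer-looking input such as "20" hides the bug entirely, which is how it can sit unnoticed in testing for years.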

    Yes — there are times when typing is a bit of a headache and requires extra busywork, casting values and such. But that is more than made up for by the time saved fixing mistakes as you write code, instead of fixing them after they happen in production.


    Having said that, I don’t use TypeScript, because I think it has only recently become mature enough to be a good choice… and WASM is so close to being in the same state, which will allow me to use even better typed languages. Ones that were designed to be strongly typed from the ground up instead of having types added to an existing dynamically typed language.

    I don’t see much point in switching things now, I’ll wait for WASM and use Rust or Swift.


  • The big difference is that Memcached is multi-threaded, while Redis is single-threaded.

    That makes Redis more efficient - it doesn’t have to waste time with locks, and assuming the server isn’t overloaded, any individual operation should be faster in Redis. Potentially a lot faster.

    But by sharing the load across threads as Memcached does, you can theoretically achieve higher throughput under heavy load… if your task is suited to multithreading and doesn’t involve a shedload of contested locks.

    Which one is a better choice will depend on your task. I’d argue don’t limit yourself to either one, consider both, pick the one that aligns best with your needs.