  • The exact format depends on the source file format, the player's platform, the duration of the clip, encryption, and whether or not it's copyrighted material. It also matters whether the software involved is older or fairly recent (the current schemes stand on the carcasses of a lot of old formats).

    If the source is a single file, it’s likely MP4 or WebM (or MOV on Apple and AVI on Windows). The video player can start downloading the whole thing in a background thread. When it has enough material buffered, it can start decoding and playback. However, if there is a network glitch, the video may start pausing and stuttering. This is typically how unprocessed video is served from a cloud file storage site.
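
    Roughly what that looks like from the player side, in browser terms (the element id and URL below are made up; the browser handles the background download itself):

    ```typescript
    // Progressive (single-file) playback: the browser downloads the MP4 and
    // buffers in the background; "waiting" fires when the buffer runs dry.
    const video = document.querySelector<HTMLVideoElement>("#player")!;
    video.src = "https://example.com/clip.mp4"; // single-file source
    video.preload = "auto";                     // start buffering right away

    video.addEventListener("canplay", () => video.play()); // enough buffered
    video.addEventListener("waiting", () => {
      console.log("rebuffering…"); // the pause/stutter case described above
    });
    ```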

    Many sites use HLS or MPEG-DASH (or their superset, CMAF) to send the video in adaptive chunks. The user experience is much better, and servers are utilized more efficiently. The manifest files describe which chunks to fetch depending on current bandwidth, so players can scale their next request up or down based on network conditions to avoid stuttering. Overloaded servers can also throttle down the chunk formats on the fly.
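
    A rough sketch of the adaptive part (the variant list and numbers are invented; real players like hls.js or dash.js have much smarter ABR logic):

    ```typescript
    // Toy adaptive-bitrate selection: pick the highest rendition from the
    // manifest whose declared bandwidth fits the measured throughput.
    interface Variant { bandwidth: number; uri: string }  // from the manifest

    function pickVariant(variants: Variant[], measuredBps: number): Variant {
      const headroom = 0.8; // leave some margin so playback doesn't stall
      const sorted = [...variants].sort((a, b) => b.bandwidth - a.bandwidth);
      return sorted.find(v => v.bandwidth <= measuredBps * headroom)
          ?? sorted[sorted.length - 1]; // fall back to the lowest rendition
    }

    // Re-measure after each chunk and request the next one accordingly:
    const next = pickVariant(
      [
        { bandwidth: 800_000,   uri: "low/chunk_042.ts"  },
        { bandwidth: 2_500_000, uri: "mid/chunk_042.ts"  },
        { bandwidth: 6_000_000, uri: "high/chunk_042.ts" },
      ],
      4_000_000, // ~4 Mbps measured
    );
    console.log(next.uri); // "mid/chunk_042.ts"
    ```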

    Apple devices' native video players only support HLS/CMAF, and inside native App Store apps, files over 10 minutes must be HLS-formatted. Non-Apple devices may use either format.

    Then there’s encryption. If a decryption key (often AES-128) is provided, the player can fetch it over HTTPS and decrypt the stream on the fly, so anyone sniffing the connection only sees encrypted content.
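
    Players like hls.js do this internally, but a rough sketch with the Web Crypto API looks like this (the URLs are placeholders, and I'm assuming the common AES-128-CBC scheme; the IV comes from the playlist's EXT-X-KEY line or the segment sequence number):

    ```typescript
    // Fetch the key and an encrypted segment, then decrypt on the fly.
    // Only someone holding the key sees the clear video data.
    async function decryptSegment(
      segmentUrl: string, keyUrl: string, iv: Uint8Array,
    ): Promise<ArrayBuffer> {
      const [segment, keyBytes] = await Promise.all([
        fetch(segmentUrl).then(r => r.arrayBuffer()),
        fetch(keyUrl).then(r => r.arrayBuffer()), // 16-byte key over HTTPS
      ]);
      const key = await crypto.subtle.importKey(
        "raw", keyBytes, "AES-CBC", false, ["decrypt"],
      );
      return crypto.subtle.decrypt({ name: "AES-CBC", iv }, key, segment);
    }
    ```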

    If the material is copyrighted, it may have DRM. On Apple devices that’s likely FairPlay; on Windows it could be PlayReady, and on Android and in some browsers it could be Widevine. Then there’s CENC, a common encryption format that lets the same stream carry either PlayReady or Widevine.
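
    In browsers these all plug into the same Encrypted Media Extensions API; something like this is how a player probes which one is available (key-system strings can vary a bit by platform and version):

    ```typescript
    // Ask the browser which DRM key system it can use for CENC content.
    const keySystems = [
      "com.widevine.alpha",      // Chrome, Firefox, Android
      "com.microsoft.playready", // Edge / Windows
      "com.apple.fps.1_0",       // Safari / FairPlay
    ];

    const config: MediaKeySystemConfiguration[] = [{
      initDataTypes: ["cenc"],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    }];

    for (const ks of keySystems) {
      navigator.requestMediaKeySystemAccess(ks, config)
        .then(() => console.log(`${ks}: supported`))
        .catch(() => console.log(`${ks}: not available`));
    }
    ```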

    Most browsers can play HLS (natively in Safari, via a small JS player elsewhere). It’s delivered over plain HTTP, it’s adaptive, and tools like ffmpeg or HandBrake can generate the manifest and all the chunks once, when a video file is uploaded. The chunks can be hosted anywhere HTTP is served.
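
    A typical setup with the hls.js library (the manifest URL is a placeholder):

    ```typescript
    // Safari plays HLS natively; other browsers go through Media Source
    // Extensions via hls.js.
    import Hls from "hls.js";

    const video = document.querySelector<HTMLVideoElement>("#player")!;
    const manifest = "https://cdn.example.com/video/master.m3u8";

    if (video.canPlayType("application/vnd.apple.mpegurl")) {
      video.src = manifest;  // native HLS
    } else if (Hls.isSupported()) {
      const hls = new Hls(); // adaptive logic, decryption, etc. handled here
      hls.loadSource(manifest);
      hls.attachMedia(video);
    }
    ```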

    This is all for one-way, one file, one viewer mode. If the video stream is meant to be two-way or multicast to lots of viewers, you’ll want to head into the world of WebRTC, RTMP, and RTSP.










  • Saw a posting this past week on SSD drive failures. They’re blaming a lot of it on ‘over-logging’: too much trivial, unnecessary data being written to logs. I imagine it gets worse when realtime telemetry like OpenTelemetry gets involved.

    Until I saw that, never thought there was such a thing as ‘too much logging.’ Wonder if there are any ways around it, other than putting logs on spinny disks.


  • This was actually pretty normal for the last couple of tech jobs. Screening call with a recruiter, phone screen with the hiring manager, then a full-day on-site or Zoom session with 3-6 rounds. Sometimes they would ask for a presentation about past work before the on-site. The number of rounds isn’t indicative of anything in big tech. Smaller companies do fewer rounds and can’t afford to be as picky.

    The problem is that after you’ve signed the offer and given your notice, you’re going on faith that the new company’s offer isn’t going to fall through. I’ve heard of it happening a few times, usually when the new company puts on a full hiring freeze.

    It sucks, and the only way to mitigate it is to be talking to multiple companies when you decide to make a move.



  • I mainly use it to create boilerplate (like adding a new REST API endpoint), or where I’m experimenting in a standalone project and am not sure how to do something (odd WebGL shaders), or when creating basic unit tests.

    But letting it write, or rewrite, existing code is very risky. It confidently makes mistakes and rewrites entire sections of working code, which then breaks. It often goes into a “doom loop,” making the same mistakes over and over. And if you tell it something it did was wrong and it should revert, it may not go back to exactly where you were. That’s why frequently snapshotting your working code into git is essential; being able to reset multiple files back to a known state will save your butt.

    Just yesterday, I had an idea for a WebGL experiment and told it to add a panel to an existing testing app I run locally. It did, and after a few iterations, got it working. But three other panels stopped working, because it had decided to completely change some unrelated upstream declarations. It took 2x the time to put everything back the way it was.

    Another thing to consider is that every so often you’ll want to go back and hand-edit the generated material to clean up sloppy code: inefficient data structures, duplicate functions in separate sections, unnecessarily verbose and obvious comments, etc. Results are also better with mature tech (lots of training examples) than with a new library or language.

    If just starting out, I would not trust AI or vibe coding. Build things by hand and learn the fundamentals. There are no shortcuts. These things may look like super tools, but they give you a false sense of confidence. Get the slightest bit complex, and they fall apart and you will not know why.

    Mainly using Cursor. Better results with Claude vs other LLMs, but still not perfect. Paid versions of both. Have also tried Cline with local codegen through Llama and Qwen. Not as good. Claude Code looks decent, but the open-ended cost is too scary for indie devs, unless you work for a company with deep pockets.