

Cloudflare does use AI to generate tarpit content, but it’s too expensive to run for every request. IIRC, they periodically change and cache the output to throw off the tarpit detectors.
One thing most DON’T do is vary the page layout, so that the placement and number of links are randomized.
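Roughly what I mean, as a hypothetical sketch (not Cloudflare’s actual code; names and numbers are made up):

```python
import random

# Hypothetical tarpit renderer: vary the number AND placement of links
# per render so the page "shape" doesn't fingerprint the tarpit.
LINK_POOL = [f"/maze/{i}" for i in range(100)]

def render_tarpit_page(paragraphs: list[str]) -> str:
    n_links = random.randint(3, 12)            # random number of links
    links = random.sample(LINK_POOL, n_links)  # random targets
    chunks = list(paragraphs)
    for href in links:
        pos = random.randrange(len(chunks) + 1)  # random placement
        chunks.insert(pos, f'<a href="{href}">{href}</a>')
    body = "\n".join(f"<p>{c}</p>" for c in chunks)
    return f"<html><body>{body}</body></html>"

print(render_tarpit_page(["cached AI filler text", "more filler"]))
```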
[ Resume in Comic Sans ]
Alaska passed it. The election results didn’t go as expected. Everyone in one party (guess) freaked out and started passing bans nationwide.
They tried to repeal RCV in Alaska too, but it failed by a slim margin even after a 100:1 repeal money advantage. They’ll probably try again: https://alaskapublic.org/elections/2024-11-20/alaskas-ranked-choice-repeal-measure-fails-by-664-votes
Edit: misread the fundraising number.
Likely DRM content. Can’t help you there.
Many applications will be waiting until cryptography and psycopg add free-threaded (GIL-free) support.
The exact format depends on the source file format, the platform of the player, the duration of the clip, encryption, and whether it’s copyrighted material. It also depends on whether the software is older or fairly recent (the current schemes stand on the carcasses of a lot of old formats).
If the source is a single file, it’s likely MP4 or WebM (or MOV on Apple and AVI on Windows). The video player can start downloading the whole thing in a background thread. When it has enough material buffered, it can start decoding and playback. However, if there is a network glitch, the video may start pausing and stuttering. This is typically how unprocessed video is served from a cloud file storage site.
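A minimal sketch of that progressive-download pattern (the URL and buffer threshold are made up; a real player would hand the buffer to a decoder):

```python
import io
import threading
import time
import requests  # assumed third-party dependency

BUFFER_TARGET = 2 * 1024 * 1024  # start "playback" after ~2 MB buffered

buf = io.BytesIO()
lock = threading.Lock()
done = threading.Event()

def download(url: str) -> None:
    # Stream the whole file in a background thread; the player reads as it arrives.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=64 * 1024):
            with lock:
                buf.write(chunk)
    done.set()

threading.Thread(target=download,
                 args=("https://example.com/video.mp4",),
                 daemon=True).start()

# Wait until enough is buffered, then start decoding/playback.
while not done.is_set():
    with lock:
        if buf.tell() >= BUFFER_TARGET:
            break
    time.sleep(0.1)
# If the network stalls after this point and the buffer drains, you get
# the pausing and stuttering described above.
```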
Many sites use HLS or MPEG-DASH (or CMAF, a common chunk format the two can share) to send the video in adaptive chunks. The user experience is much better, and so is server utilization. Manifest files describe which chunks to fetch for each bitrate, so players can upgrade or downgrade their next request based on current network conditions to avoid stuttering, and overloaded servers can throttle down the chunk formats on the fly.
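The gist of the adaptive part, as a sketch (the master playlist below is a made-up example; real players also smooth their bandwidth estimates):

```python
import re

# Made-up HLS master playlist; real ones are served by the CDN.
MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def pick_variant(master: str, measured_bps: float) -> str:
    """Pick the highest-bandwidth variant that fits measured throughput."""
    variants = []
    lines = master.strip().splitlines()
    for i, line in enumerate(lines):
        m = re.search(r"BANDWIDTH=(\d+)", line)
        if m and i + 1 < len(lines):
            variants.append((int(m.group(1)), lines[i + 1]))
    fitting = [v for v in variants if v[0] <= measured_bps]
    # Fall back to the lowest variant if even that is too fat.
    return max(fitting)[1] if fitting else min(variants)[1]

print(pick_variant(MASTER, 3_000_000))  # -> mid/index.m3u8
```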
Apple’s native video players only support HLS/CMAF, and inside native App Store apps, streaming video over 10 minutes must be HLS-formatted. Non-Apple devices may use either format.
Then there’s encryption. If a decryption key is provided (HLS’s AES-128 mode), the player can download it over HTTPS, then decrypt the stream on the fly. This is so anyone sniffing the stream only sees encrypted content.
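The decryption itself is mundane once you have the key. A minimal sketch with the `cryptography` package (key, IV, and chunk bytes are placeholders; real ones come from the playlist’s #EXT-X-KEY tag and media sequence numbers):

```python
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_chunk(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # HLS AES-128 is AES-CBC with PKCS7 padding.
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

# key = https_get(key_uri_from_playlist)   # fetched over HTTPS, per above
# clear_ts = decrypt_chunk(encrypted_ts, key, iv)
```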
If the material is copyrighted, it may have DRM. On Apple devices this is likely FairPlay; on Windows it could be PlayReady, and on Android and in some browsers, Widevine. Then there’s CENC (Common Encryption), which uses a common encryption format so the same stream can serve both PlayReady and Widevine.
HLS works in just about every browser (natively in Safari, via MSE-based players like hls.js in the rest), since it’s delivered over plain HTTP, it’s adaptive, and tools like ffmpeg or HandBrake can generate all the playlists and chunks once, when a video file is uploaded. The chunks can be hosted anywhere HTTP is served.
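The one-time packaging step can be as simple as shelling out to ffmpeg. A sketch (these flags are a typical VOD starting point, not gospel; check `ffmpeg -h muxer=hls` for your version):

```python
import subprocess

# Repackage an uploaded MP4 into an HLS playlist plus ~6-second chunks.
subprocess.run(
    [
        "ffmpeg", "-i", "upload.mp4",
        "-codec", "copy",             # no re-encode, just repackage
        "-hls_time", "6",             # target chunk duration in seconds
        "-hls_list_size", "0",        # keep every chunk in the playlist
        "-hls_playlist_type", "vod",
        "-f", "hls",
        "index.m3u8",
    ],
    check=True,
)
```

That gives you a single rendition; for the adaptive ladder you’d re-encode at several bitrates and write a master playlist on top.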
This is all for one-way, one file, one viewer mode. If the video stream is meant to be two-way or multicast to lots of viewers, you’ll want to head into the world of WebRTC, RTMP, and RTSP.
I love that so many people know how to do this.
Was using Bruno and RapidAPI (fka Paw) locally, but they couldn’t do conditional sequences properly. Switched to Postman, which could, despite knowing they keep everything in the cloud.
Guess I’m switching back 🤦🏻‍♂️
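For anyone wondering what I mean by “conditional sequences”: call an endpoint, branch on the response, then decide the next call. In plain Python it’s just this (the endpoints here are hypothetical):

```python
import time
import requests  # assumed dependency

BASE = "https://api.example.com"  # hypothetical API

# Step 1: kick off a job.
job = requests.post(f"{BASE}/jobs", json={"task": "export"}).json()

# Step 2: the "conditional" part -- what happens next depends on the response.
while job.get("status") not in ("done", "failed"):
    time.sleep(1)
    job = requests.get(f"{BASE}/jobs/{job['id']}").json()

# Step 3: branch again on the final state.
if job["status"] == "done":
    result = requests.get(f"{BASE}/jobs/{job['id']}/result").json()
else:
    raise RuntimeError(f"job failed: {job}")
```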
Return them to original form. 30 second pulse in blender. Throw in a cup of coffee and some non-dairy creamer.
Drink on way to work through a fat straw.
It actually is pretty incredible. You get paid to draw boxes with icons, and arrows connecting them.
It’s the felt, iron-on letterform of Cooper Black (for copyright reasons, sometimes called Black Cooper).
Been around since the last century.
If needed, you may also want to look into multi-arch images so it serves the right ARM build for the Pi: https://www.docker.com/blog/multi-arch-images/
You’re overpaying 😁
Cloudflare’s static web hosting is free, including TLS/SSL, DDoS protection, WAF, and AI scraper protection: https://softwareonbudget.com/blog/how-to-host-static-website-for-free-with-cloudflare-pages/
And if you connect it to a GitHub repo, it auto-updates on push to main.
No connection. Just a happy user and a fan.
Saw a post this past week on SSD failures. They’re blaming a lot of it on ‘over-logging’: writing too much trivial, unnecessary data to logs. I imagine it gets worse when realtime telemetry like OpenTelemetry gets involved.
Until I saw that, never thought there was such a thing as ‘too much logging.’ Wonder if there are any ways around it, other than putting logs on spinny disks.
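One mitigation I can think of, beyond spinny disks: batch the writes instead of hitting the disk once per log line. A sketch with Python’s stdlib:

```python
import logging
import logging.handlers

# Buffer low-severity records in memory and flush to disk in batches,
# or immediately when an ERROR shows up.
file_handler = logging.FileHandler("app.log")
buffered = logging.handlers.MemoryHandler(
    capacity=1000,                # flush every 1000 records...
    flushLevel=logging.ERROR,     # ...or as soon as an ERROR arrives
    target=file_handler,
)

logger = logging.getLogger("app")
logger.addHandler(buffered)
logger.setLevel(logging.INFO)

for i in range(5000):
    logger.info("trivial event %d", i)  # batched, not 5000 tiny writes

logging.shutdown()  # flushes whatever is still buffered
```

Doesn’t cut total bytes written, but it turns thousands of tiny writes into a few big sequential ones, which should mean less write amplification. Sampling or dropping trivial log levels is the other obvious lever.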
This was actually pretty normal for my last couple of tech jobs: screening call with a recruiter, phone screen with the hiring manager, then a full-day on-site or Zoom with 3-6 rounds. Sometimes they would ask for a presentation about past work before the on-site. The number of rounds isn’t indicative of anything in big tech. Smaller companies do fewer and can’t afford to be as picky.
The problem is that after you’ve signed the offer and given your notice, you’re going on faith that the new company’s offer isn’t going to fall through. I’ve heard of it happening a few times, usually when the new company goes into a full hiring freeze.
It sucks, and the only way to mitigate it is to be talking to multiple companies when you decide to make a move.
Pfft. Anything smaller than 340 meters isn’t worth getting out of bed for.
I mainly use it to create boilerplate (like adding a new REST API endpoint), or where I’m experimenting in a standalone project and am not sure how to do something (odd WebGL shaders), or when creating basic unit tests.
But letting it write, or rewrite, existing code is very risky. It confidently makes mistakes and rewrites entire sections of working code, which then break. It often goes into a “doom loop,” making the same mistakes over and over. And if you tell it something it did was wrong and it should revert, it may not go back to exactly where you were. That’s where frequently snapshotting your working code into git is essential; being able to reset multiple files back to a known state will save your butt.
Just yesterday, I had an idea for a WebGL experiment. Told it to add a panel to an existing testing app I run locally. It did, and after a few iterations, got it working. But three other panels stopped working, because it decided to completely change some unrelated upstream declarations. Took 2x the time to put everything back the way it was.
Another thing to consider: every so often, you’ll want to go back and hand-edit the generated material to clean up sloppy code, e.g. inefficient data structures, duplicate functions in separate sections, unnecessarily verbose and obvious comments, etc. Results are also better with mature tech (lots of training examples) than with a new library or language.
If just starting out, I would not trust AI or vibe coding. Build things by hand and learn the fundamentals. There are no shortcuts. These things may look like super tools, but they give you a false sense of confidence. Get the slightest bit complex, and they fall apart and you will not know why.
Mainly using Cursor. Better results with Claude vs other LLMs, but still not perfect. Paid versions of both. Have also tried Cline with local codegen through Llama and Qwen. Not as good. Claude Code looks decent, but the open-ended cost is too scary for indie devs, unless you work for a company with deep pockets.