• 3 Posts
  • 186 Comments
Joined 1 year ago
Cake day: January 13, 2024


  • You need the chicken to reach 165 °F (74 °C) to be food safe. It takes a long time to cook at 100–200 °C because the heat transfers much more slowly. With this instant slap-based cooking method, the chicken only needs to get to the food-safe temperature.

    Using the OP’s calculations and a cooked temperature of 74C:

    It would take 8315 average slaps

    or

    A slap at around 813 m/s (about 1819 mph).

    Edit: corrected the second calculation (it still might be wrong). I also rounded the numbers to whole integers.
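    To make the arithmetic concrete, here is a rough sketch of the heat-balance reasoning. All the inputs below (chicken mass, specific heat, energy per slap, effective hand mass) are assumptions for illustration, not the OP’s actual figures, so the outputs only land in the same ballpark as the quoted 8315 slaps and 813 m/s.

```python
import math

# All values below are hypothetical stand-ins, not the OP's figures.
C_CHICKEN = 3220.0            # J/(kg*K), approx. specific heat of chicken meat
M_CHICKEN = 1.0               # kg, assumed mass of the chicken
T_START, T_DONE = 20.0, 74.0  # deg C: room temperature -> food-safe

E_SLAP = 21.0                 # J, assumed kinetic energy of an average slap
M_HAND = 0.5                  # kg, assumed effective striking mass

# Heat required to raise the whole chicken to food-safe temperature.
heat_needed = C_CHICKEN * M_CHICKEN * (T_DONE - T_START)  # joules

# Option 1: many average slaps, each depositing E_SLAP joules.
n_slaps = math.ceil(heat_needed / E_SLAP)

# Option 2: one slap delivering all the heat at once, (1/2)*m*v^2 = heat_needed.
v_single = math.sqrt(2 * heat_needed / M_HAND)            # m/s
v_single_mph = v_single * 2.23694                         # m/s -> mph

print(f"{n_slaps} slaps, or one slap at {v_single:.0f} m/s ({v_single_mph:.0f} mph)")
```

    Swapping in the OP’s actual numbers should land closer to their figures; the point is just that the required single-slap speed scales with the square root of the heat needed.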



  • But the front/hood is much shorter. Also, people driving that type of van are much more likely to be doing so in a professional capacity and are significantly less likely to be asshole drivers fucking around with their phones while driving. Plenty of people are bad drivers at baseline, but someone on the job in a van used for commercial purposes is more likely to at least be paying attention and not speeding everywhere.

    Edit: I marked up your image to illustrate the point made much more eloquently in the video. Because of the length of the hood, the truck has a much longer stretch of road obstructed from view in front of it, and that’s with a standard truck that doesn’t have one of the very popular lift kits (and assuming the driver is relatively tall).
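    The hood-length point is just similar triangles. The sketch below uses entirely hypothetical dimensions (eye height, hood height, eye-to-hood distance) to show how a taller, longer hood stretches the blind zone in front of the bumper.

```python
def blind_distance(eye_height_m, hood_height_m, eye_to_hood_m):
    """Ground distance beyond the hood edge that the driver cannot see.

    The sight line from the eye over the hood edge drops
    (eye_height - hood_height) metres over eye_to_hood metres, so by
    similar triangles it needs a further
    hood_height * eye_to_hood / (eye_height - hood_height)
    metres to reach the ground.
    """
    return hood_height_m * eye_to_hood_m / (eye_height_m - hood_height_m)

# Cab-over-style van: low, short hood close to the driver (hypothetical numbers).
van_blind = blind_distance(eye_height_m=2.0, hood_height_m=1.0, eye_to_hood_m=1.0)

# Pickup: taller, longer hood farther from the driver (hypothetical numbers).
truck_blind = blind_distance(eye_height_m=1.8, hood_height_m=1.4, eye_to_hood_m=2.5)

print(f"van: {van_blind:.1f} m blind, truck: {truck_blind:.1f} m blind")
```

    Even with made-up dimensions, the same formula shows the obstructed distance blowing up as the hood gets taller relative to the driver’s eye height, which is exactly what a lift kit does.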



  • medgremlin@midwest.social to Showerthoughts@lemmy.world: *Permanently Deleted*
    (edited, 1 month ago)

    The age group of children that gets put on leashes doesn’t have the brain development to feel shame or humiliation. Their brains literally have not yet developed the cortex that handles that.

    From the age of about 2 to 4, my Dad made a harness out of climbing webbing for me and clipped the leash to a carabiner on his belt when we were out and about. We were constantly going to places like Haight St in San Francisco and hiking on the sea cliffs in Santa Cruz. I 100% would have gotten myself killed without that leash, because I was very curious about the fishies in the ocean at the bottom of that 50–100 ft cliff, and my Dad was wrangling me and my sibling by himself while Mom was at work.

    I’m pretty sure there’s a picture somewhere of me leaning over a cliff, held back by the leash, because I was a rambunctious little gremlin who was about 20 years away from having a fully developed frontal lobe. And I want to find that picture and share it with my friends, because I think it’s hilarious.




  • Part of my significant suspicion regarding AI is that most of my medical experience is in Emergency Medicine, which is also my intended specialty upon graduation. The only thing AI might be useful for there is functioning as a scribe. The AI is not going to tell me that the patient who denies any alcohol consumption smells like a liquor store, or that the completely unconscious patient has asterixis (a flapping tremor). AI cannot tell me anything useful for my most critical patients, and for the less critical ones, I am perfectly capable of pulling up UpToDate or DynaMed and finding what I’m looking for myself. Maybe it can be useful for suggesting next steps, but for the initial evaluation? Nah. I don’t trust a glorified text predictor to catch the things that will kill my patients in the next 5 minutes.


  • My mistake, I recalled incorrectly: it got 83% of the diagnoses wrong. https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/

    The chat interface is stupid in so many ways, and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans and would be impossible to teach to an AI. If you are familiar with people and know how to work with them, you can pick up on things like intonation and body language that indicate the patient didn’t actually understand the question and you need to rephrase it to get the information you need, or that there’s something they are uncomfortable saying or asking. Or indications that they might be lying about things like sexual activity or substance use.

    And that’s not even getting into the fact that AIs can’t do a physical exam, which may reveal things the interview did not, or the patients who can’t tell you what’s wrong because they are babies, have an altered mental status, or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more once you start talking about treatments that aren’t pills.

    Also, the exams are only one part of the evaluation you go through in medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated and examined to ensure that you are actually competent as a physician before you’re allowed to see patients without a supervising attending physician. For example, there was a student at my school who had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor were just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art portion is through supervised practice until the trainee is able to operate independently.


  • In order to tell it what is important, you would have to read the material to begin with. Also, the tests we took in class were in preparation for the board exams which can ask you about literally anything in medicine that you are expected to know. The amount of information involved here and the amount of details in the text that are important basically necessitate reading the text yourself and knowing how the information in that text relates to everything else you’ve read and learned.

    Trying to get the LLM to spit out an actually useful summary would be more time-consuming than just doing the reading to begin with.