• 4 Posts
  • 23 Comments
Joined 1 year ago
Cake day: July 2nd, 2023



  • Z4rK@lemmy.world to memes@lemmy.world · "Great job" · 2 points · 4 months ago

    Alas. They have said they plan to open some of the source, and potentially all of it, but there has been little progress.

    They recently ported to Linux, which I think will bring them much more negative feedback here, so hopefully the added pressure will push them to pick the right copyleft license and open up their source to build trust.


  • Z4rK@lemmy.world to memes@lemmy.world · "Great job" · 4 points · edited · 4 months ago

    There are two modes of AI integration. The first is a standard LLM in a side panel: search and learning directly in the terminal, with the commands I need available to run right where I need them. The result is the same as if you had asked ChatGPT your question, then copied the relevant part of the answer into your terminal and ran it.

    There is also AI Command Suggestion, where you start typing a command or search prefixed with # and get runnable commands directly back. It's quite different from auto-complete (there is very good auto-complete and command suggestion as well; I'm just talking about the AI-specific features here).

    https://www.warp.dev/warp-ai

    It’s just a convenient placement of AI at your fingertips when working in the terminal.


  • Z4rK@lemmy.world to memes@lemmy.world · "Great job" · 5 points · 4 months ago

    Warp.dev! It’s the best terminal I’ve used so far, and the best use of AI as well. A bit of AI help is extremely useful for the thousands of small commands you know exist but rarely use. And it’s very well implemented.


  • Z4rK@lemmy.world to Funny: Home of the Haha@lemmy.world · "Good effort" · 3 points · 5 months ago

    All these examples are not just using Stable Diffusion, though. They are using an LLM to create a generative image prompt for DALL-E / Stable Diffusion, which then gets executed. In none of these examples are we shown the actual prompt.

    If you instead instruct the LLM to first show the text prompt, review it to make sure it does not mention any elephants, revise it if necessary, and only then generate the image, you’ll get much better results. Granted, ChatGPT is terrible at following instructions like these unless you set up the prompt very specifically, but it will still follow more of them internally.

    Anyway, the issue in all the examples above does not stem from Stable Diffusion, but from the LLM writing an ineffective prompt for it: it tries to exclude elephants with a simple negative word, which diffusion models handle poorly.
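
    The "review and revise the prompt" step can be sketched in code. This is a minimal illustration, not anyone's actual pipeline: `strip_banned` is a hypothetical helper standing in for the revision instruction you would give the LLM before the prompt is sent to the image model.

    ```python
    import re

    def strip_banned(draft: str, banned: str) -> str:
        """Remove a banned concept, plus any negation of it, from a draft
        image prompt. Diffusion models tend to draw every noun they see,
        so "no elephants" still puts elephants in the image; the fix is
        to drop the word entirely rather than negate it."""
        pattern = rf"\b(?:no|without)?\s*{banned}s?\b"
        return re.sub(pattern, "", draft, flags=re.IGNORECASE).strip(" ,")

    # A draft prompt as an LLM might write it, then the revised version:
    draft = "A wide empty savanna at dusk, no elephants"
    print(strip_banned(draft, "elephant"))  # -> "A wide empty savanna at dusk"
    ```

    In a real flow you would ask the LLM itself to do this rewrite; the regex just makes the idea concrete.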



  • I think part of the issue stems from Lemmy not having a good way of tracking a topic / community that exists on multiple instances, so you have to subscribe to the community on each instance separately. People want the most active one, so they track the one on lemmy.world (LW), since it has the most members. And since they mostly track communities on LW, it also makes sense to just use LW as their primary instance.

    If Lemmy had built-in support for tags, or for subscribing to a topic / multi-instance community, I think people would feel less inclined to default to the largest instance.