• 4 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 21st, 2023



  • Some additional info the article doesn’t address or skims over:

    The accounts were suspended for 3 months.

    They only suspended accounts that were heavily abusing the system. Players who duped by accident, or only a handful of times, weren't punished beyond the removal of some of their in-game currency and maybe a ship or two they had bought with the earnings from duping.

    This is the first time that Star Citizen players have had a wave of suspensions like this for an exploit.

    This is most likely because of how the exploit affected the servers. In Star Citizen, abandoned ships stick around forever on a particular instance, so other players have to hijack/tow/destroy/salvage them to get rid of them. The players abusing this exploit would duplicate ships with sellable cargo as fast as they possibly could, leaving behind more ships than the servers can normally handle.

    This also happened around the time of a free fly event, where anyone could try out the game for a bit without having to pay, so the game wasn't performing as well as it could have been during the event. Although, tbh, this game usually struggles during free fly events anyway.


  • Hmmm it was even able to pull in private DMs.

    Maybe private DMs on Mastodon aren't as private as everyone thinks… that, or the open nature of ActivityPub is leaking them somehow?

    Edit - From the article:

    Even more shocking is the revelation that somehow, even private DMs from Mastodon were mirrored on their public site and searchable. How this is even possible is beyond me, as DM’s are ostensibly only between two parties, and the message itself was sent from two hackers.town users.

    From what @delirious_owl@discuss.online mentioned below, it sounds like this shouldn’t be very shocking at all.
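    It helps to see why this isn't shocking: in ActivityPub there is no separate "DM" message type. A direct message is just an ordinary Create/Note whose addressing happens to omit the Public collection and any followers collection, so its privacy depends entirely on every receiving server honoring that addressing. A minimal sketch (the actor URLs are hypothetical, and this is not Mastodon's actual code):

```python
# In ActivityPub, "visibility" is only addressing. A DM is a Note whose
# to/cc lists name specific actors and nothing else; any server that
# receives it could store or republish it, which is how a mirror site
# could end up exposing them.
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def is_direct(activity: dict) -> bool:
    """True if the activity is addressed only to specific actors (a 'DM')."""
    recipients = activity.get("to", []) + activity.get("cc", [])
    return bool(recipients) and all(
        r != PUBLIC and not r.endswith("/followers") for r in recipients
    )

# Hypothetical example actors:
dm = {
    "type": "Create",
    "actor": "https://hackers.town/users/alice",
    "to": ["https://hackers.town/users/bob"],  # only the recipient; no Public
    "object": {"type": "Note", "content": "just between us"},
}
public_note = {
    "type": "Create",
    "actor": "https://hackers.town/users/alice",
    "to": [PUBLIC],
    "cc": ["https://hackers.town/users/alice/followers"],
    "object": {"type": "Note", "content": "hello world"},
}
print(is_direct(dm))           # True
print(is_direct(public_note))  # False
```

    Nothing in the payload itself is encrypted, which is why a misbehaving or compromised server on either end can leak "private" DMs.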








  • There’s a place for AI in NPCs but developers will have to know how to implement it correctly or it will be a disaster.

    LLMs can be trained on specific characters and backstories, or even "types" of characters. If they're trained correctly, they'll stay in character and be reactive in more ways than any scripted character ever could. But if the devs are lazy and just hook it up to ChatGPT with a simple prompt telling it to "pretend" to be some character, then it's going to be terrible like you say.

    Now, this won’t work very well for games where you’re trying to tell a story like Baldur’s Gate… instead this is better for more open world games where the player is interacting with random characters that don’t need to follow specific scripts.

    Even then it won’t be everything. Just because an LLM can say something “in-character” doesn’t mean it will line up with its in-game actions. So additional work will need to be made to help tie actions to the proper kind of responses.

    If a studio is able to do it right, this has game-changing potential… but I'm sure we'll see a lot of rushed work before anyone pulls it off well.
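    One way the "tie actions to responses" part could look in practice: bake the character's backstory and the engine's legal action list into the system prompt, then validate whatever action the model proposes before the game executes it. A minimal sketch with hypothetical names (no real LLM is called here):

```python
# Sketch: grounding an LLM-driven NPC in a character sheet and restricting
# its output to actions the game engine actually supports, instead of a
# bare "pretend to be X" prompt.
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    backstory: str
    allowed_actions: set = field(default_factory=set)

def build_system_prompt(c: Character) -> str:
    """Bake the backstory and the legal action list into the system prompt
    so the model stays in character and only proposes valid actions."""
    return (
        f"You are {c.name}. Backstory: {c.backstory}\n"
        "When you act, emit exactly one line 'ACTION: <verb>' where <verb> "
        f"is one of: {', '.join(sorted(c.allowed_actions))}."
    )

def extract_action(reply: str, c: Character):
    """Tie the model's dialogue back to the game: accept the proposed
    action only if the engine can actually perform it."""
    for line in reply.splitlines():
        if line.startswith("ACTION: "):
            verb = line.removeprefix("ACTION: ").strip()
            return verb if verb in c.allowed_actions else None
    return None

guard = Character(
    "Gate Guard",
    "A weary veteran guarding the city gate.",
    {"open_gate", "draw_sword", "wave_through"},
)
print(extract_action("Halt, traveler!\nACTION: wave_through", guard))
print(extract_action("ACTION: cast_fireball", guard))  # rejected: not a legal action
```

    The validation step is the point: the LLM only ever *suggests* an action, and anything outside the engine's vocabulary is dropped, which keeps in-character chatter from producing impossible in-game behavior.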