• realharo@lemm.ee · 1 year ago

    Human experts often say things like “customers say X, but they probably mean they want Y and Z”, based purely on long experience dealing with people in their field.

    That is something a model can learn. It can ask follow-up questions to clarify, or even voice doubts (“are you sure you don’t mean Y instead?”). It’s not that complicated.
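    As a rough sketch of what that could look like with the OpenAI Python SDK (the system-prompt wording and model name here are my own illustration, not anything OpenAI actually ships):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical system prompt: tell the model to clarify instead of guessing.
    system_prompt = (
        "You are a support assistant. If a request is ambiguous, do not guess: "
        "ask a short follow-up question first, e.g. "
        "'Are you sure you don't mean Y instead?'"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model would do
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "I want X."},
        ],
    )
    print(response.choices[0].message.content)
    ```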

    (Could be why OpenAI chooses to degrade the experience so much when you disable chat history and training in ChatGPT 😀)

    Today’s LLMs have other quirks, like the fact that adding certain words to a prompt can help even when they barely change its meaning, but that’s not magic either.

    • abhibeckert@lemmy.world · 1 year ago

      “customers say X, they probably mean they want Y and Z”

      Sure, an LLM can help catch some of those situations. But if anything, that makes prompt engineering even more important.

      Sometimes the customer actually wants exactly X, and a prompt engineer needs to anticipate that and disable the Y/Z behaviour. Prompt engineering is changing, but it’s not going away.
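      A rough sketch of the kind of switch a prompt engineer might expose for this (the flag name and prompt wording are made up for illustration):

      ```python
      # Hypothetical toggle: infer intent (the Y/Z behaviour) vs. take the
      # request literally. A prompt engineer decides which mode applies.
      INFER_INTENT = False

      SYSTEM_PROMPTS = {
          True: (
              "If the customer asks for X, they usually also want Y and Z; "
              "include them unless told otherwise."
          ),
          False: (
              "Interpret requests literally. If the customer asks for X, "
              "deliver exactly X; do not add Y or Z."
          ),
      }

      def build_messages(user_request: str) -> list[dict]:
          """Assemble the chat messages for whichever mode is enabled."""
          return [
              {"role": "system", "content": SYSTEM_PROMPTS[INFER_INTENT]},
              {"role": "user", "content": user_request},
          ]

      print(build_messages("I want X."))
      ```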