We are keeping a list of AI/ML-related links, from research papers to more accessible items, hoping to share some of the more approachable posts with a wider community!

  • manitcor@lemmy.intai.tech (OP) · 1 year ago

    Not wrong, but not entirely right either: it does not “know” anything and is not fundamentally capable of “analysis” the way we usually describe it. It’s a weighted token map; what makes it special is how those weights were derived. There are papers on self-reflection that are essentially processes for mapping those weights. You can also inspect the probabilities of your token response through the API.
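
    For instance, a minimal sketch with the OpenAI Python client, assuming the chat completions `logprobs` option (the model name is just a placeholder):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": "Is water wet? Answer yes or no."}],
        logprobs=True,        # return per-token log probabilities
        top_logprobs=5,       # plus the 5 most likely alternatives per position
        max_tokens=5,
    )

    # Each generated token comes back with its log probability and the
    # alternatives the model considered, which is what lets you assess
    # the probabilities of the token response.
    for tok in resp.choices[0].logprobs.content:
        alts = ", ".join(f"{t.token!r}:{t.logprob:.2f}" for t in tok.top_logprobs)
        print(f"{tok.token!r} logprob={tok.logprob:.2f}  alternatives: {alts}")
    ```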

    What’s interesting is that sufficiently large models can build on themselves using techniques similar to those applied in training and fine-tuning, right in the prompt. This means you can combine reflective conversation and embeddings to create prompts that act like fine-tuned agents, as sketched below. Great for fast prototyping, and cheaper than a fine-tune run!
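
    A rough sketch of that pattern, not a definitive recipe: retrieve the most relevant worked examples by embedding similarity, drop them into the prompt as in-context “training data”, then do one reflective pass before answering. The model names, the tiny example store, and the `answer()` helper are all hypothetical placeholders.

    ```python
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    EXAMPLES = [  # stand-ins for curated Q/A pairs you would otherwise fine-tune on
        ("How do I reset my router?", "Hold the reset button for 10 seconds..."),
        ("Why is my wifi slow?", "Check for interference on the 2.4 GHz band..."),
        ("How do I port forward?", "Open your router admin page and..."),
    ]

    def embed(texts):
        out = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in out.data])

    EX_VECS = embed([q for q, _ in EXAMPLES])

    def answer(question, k=2):
        # 1. Embed the question and pick the k nearest stored examples.
        qv = embed([question])[0]
        sims = EX_VECS @ qv / (np.linalg.norm(EX_VECS, axis=1) * np.linalg.norm(qv))
        shots = [EXAMPLES[i] for i in np.argsort(-sims)[:k]]

        # 2. Build a prompt that uses those examples as in-context "training data".
        shot_text = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
        messages = [
            {"role": "system", "content": "Answer in the same style as the examples."},
            {"role": "user", "content": f"{shot_text}\n\nQ: {question}\nA:"},
        ]
        draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        draft_text = draft.choices[0].message.content

        # 3. Reflective pass: ask the model to critique and revise its own draft.
        messages += [
            {"role": "assistant", "content": draft_text},
            {"role": "user", "content": "Critique your answer for accuracy and brevity, "
                                        "then give only a revised final answer."},
        ]
        final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return final.choices[0].message.content

    print(answer("My internet keeps dropping, what should I check?"))
    ```

    The example store and reflection prompt stand in for whatever domain data and critique style you actually care about; the point is that retrieval plus self-reflection happens entirely in the prompt, with no fine-tune job.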