You are viewing a single comment's thread from:

RE: Time for updates (v1.27.4 is here)

in Blockchain Wizardry · 2 years ago

It can give some useful responses, but I want to know what sources it used, and any numerical answers need checking. I expect it will improve, as these are very early days. It's just the hyped thing of the moment, but it may become something we take for granted soon.


I suspect we'll have to await the ongoing decentralization of LLMs to avoid the 'safety' features of publicly available products, as they also seem to have malicious features built in. The Vulture reporter's account of being claimed dead, with the LLM supplying a completely fake URL to 'prove' it, was not the result of unfortunate weighting of the text samples the LLM trained on.

That product clearly had malicious harm programmed into its capabilities, and that's a problem for anyone who seeks to employ these products. I don't think these products can be used with confidence unless compiled by their users, and aeons of verification will be needed to sort out their code.

I think they may be more incompetent than malicious.

I'm sure they're more one than the other, but both are necessary to account for the demonstrated results.

There just isn't a mechanism whereby an LLM weighting training material can invent URLs to support a false, invented claim that a specific person has died. Such a mechanism is additional to weighting text.