
RE: Time for updates (v1.27.4 is here)

in Blockchain Wizardry · 2 years ago

That's the problem: there is news, but I fail to share it, hoping that the mythical somebody will do that instead.
(See The Story of Everybody, Somebody, Anybody And Nobody)

Good AI posts are ok as long as they are good.
Some were saying that Stack Overflow could be successfully replaced by ChatGPT. The thing is, it's way easier to spot an idiot on a traditional forum than an error in such well-trained GPT output.

A good example is a discussion we had within the Core Dev team, where the GPT output was
x = 4,313,000,000,000,000,000 and it sounded about right, a big number that you would expect...
except that I made a very simplified estimate, one that can be done within a few seconds on a basic calculator, showing that it is at least
x = 4,151,276,936,978,215,276,118,016
So a human could give an answer that is 1,000,000x (ONE MILLION TIMES) more accurate.

Of course, as with any tool, you could make great use of it, but you have to be smarter than a hammer to use a hammer.


It can give some useful responses, but I want to know what sources it used, and any numerical answers need checking. I expect it will improve, as these are very early days. It's just the hyped thing of the moment, but it may become something we take for granted soon.

I suspect we'll have to await the ongoing decentralization of LLMs to avoid the 'safety' features of publicly available products, as they also seem to have malicious features built in. The Vulture reporter's account of an LLM claiming he had died and supplying a completely fake URL to 'prove' it was not the result of unfortunate weighting of the text samples the LLM trained on.

That product clearly had malicious harm programmed into its capabilities, and that's a problem for anyone who seeks to employ these products. I don't think these products can be used with confidence unless compiled by their users, and aeons of verification will be needed to sort through their code.

I think they may be more incompetent than malicious.

I'm sure they're more one than the other, but both are necessary to explain the demonstrated results.

There just isn't a mechanism whereby an LLM weighting training material can invent URLs to support a false, invented claim that a specific person has died. Such a mechanism is additional to weighting text.