You may remember this image from my early experiments with prompt-based image making. The prompt was "forty sausages hanging on a washing line in Birmingham" and the result came out quite close to what I had imagined.
Today I asked the OpenAI chatbot what hanging forty sausages on a washing line in Birmingham might mean.
It said "Hanging forty sausages on a washing line in Birmingham would not necessarily have a specific meaning or significance. It would simply be an unusual and unexpected sight that might attract attention and curiosity. The meaning or significance of the act would depend on the intentions and motivations of the person who hung the sausages, as well as the reactions and interpretations of the people who saw them.
If the person who hung the sausages was trying to make a statement or communicate a message, then the act might have a deeper meaning or significance. For example, if the sausages were hung as part of a protest or an art installation, they might be intended to symbolize something specific, such as the impact of factory farming on the environment or the absurdity of consumer culture.
However, if the person who hung the sausages was simply doing it for fun or as a joke, then the act might not have any deeper meaning or significance beyond the initial surprise and amusement it caused. In this case, the meaning or significance of the act would be largely subjective and would depend on the individual interpretations of the people who saw it."
Once you spend some time using this model, it becomes easy to recognise the patterns in style and approach that characterise its responses. At least it feels easy to a human. I'm interested now in how machines might be able to identify themselves. We're going to need something like that around here if "proof of brain" morphs into "proof of AI".
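Once you start spotting those tics, it's tempting to sketch how a machine might spot them too. What follows is a toy illustration only, not a real detector: the list of hedging phrases is just my own guess at what stands out in the reply above, and any threshold for calling something machine-written would be entirely arbitrary.

```python
# Toy sketch: count hedging phrases that (to my eye) crop up a lot
# in these chatbot replies. The phrase list is my own guess, not any
# real detector's feature set.
HEDGES = [
    "would depend",
    "would not necessarily",
    "meaning or significance",
    "for example",
]

def hedge_score(text: str) -> float:
    """Hedging phrases per 100 words: a crude stylistic signal."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in HEDGES)
    return 100.0 * hits / max(len(text.split()), 1)

reply = ("The meaning or significance of the act would depend on the "
         "intentions and motivations of the person who hung the sausages.")
print(f"{hedge_score(reply):.1f} hedging phrases per 100 words")
```

Real detection leans on much richer signals than a phrase list, which is partly why I suspect the "proof of AI" question will be harder than it looks.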
I've seen a lot of talk about the chatbot, but I haven't played with it myself. I have tried the image generator, and it can be handy for creating a new image for a post. The software is getting better at some form of understanding what people write, but what does it actually know? It will keep improving, but could it ever be considered aware? That would create all sorts of issues.
There will no doubt be other developers working on detecting machine-generated content.
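For anyone wondering what that detection work might build on: one common signal in published detectors is perplexity, i.e. how predictable a text is to a language model, on the theory that machine-written text tends to be unusually predictable. Here's a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library. It's a standard perplexity calculation, not any particular tool's actual method, and a low score is only a weak hint, not proof.

```python
# Minimal sketch: score a text's perplexity under GPT-2.
# Lower perplexity = more predictable, which some detectors treat
# as a (weak) hint that a model wrote it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the same ids as labels makes the model return
        # the mean cross-entropy loss over the sequence.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("The meaning or significance of the act would depend on "
                 "the intentions and motivations of the person."))
```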
Are these the 'interesting times' they talked about?