No, I meant the interactive fiction.
The AI is cool as well though. I've been mentally wrestling for the last few days with how one might simulate emotional responses myself. Not for the usual fictional reason, but to help learning and creation. In science fiction they often have some android struggling with emotion, but I wonder if it might be far more important than we realize, and may need to be solved sometime soon. Not so we can have a computer cry because puppies, but so it can make subjective decisions.
If you're interested, the interactive fiction The Entropy Cage is available online and for Android.
There's some theory that emotion is quick thinking and conscious thought is slow thinking. I'm not sure how I feel about that. The trick for building such AIs is coming up with a model that can be converted into code.
Perhaps it's simpler than I'm thinking. Perhaps it's a side effect of the complex reward system of the human brain, and something similar will come about in all significantly advanced general AI.
What are your ideas for emulating that in code? Our brains have complex reward systems, as you say, and also a few major parts. I thought the reward system was mostly tied to the simplest part of our brain (the reptilian brain), with some ability to override it in the higher brain. We also have super-neurons to help with the fast thinking. At least starting with bio-mimicry might lead somewhere.
(Please don't take this as me knowing anything particularly useful: I'm assembling words on the small odds that the conversation might spark off some useful inspiration)
I honestly have no idea. That's likely why my brain has been focused on trying to figure it out for days.
The "reward" system for a standard neural network is simple, from what I understand of it. It's just adjusting certain numbers in the neural net to make certain outcomes more likely. Our brain's reward system is far more complex. Although it does use various means to reinforce certain patterns, just like in an artificial neural net, there's also pleasure involved. I have absolutely no idea how we could possibly program pleasure into an AI or bot, yet I have a feeling it might actually be a necessity for certain AI tasks. Like when you make an AI that creates art. You could train it to create art based on famous artists. You could even use data on what people like, or how much they pay for different pieces of art, to make the AI more likely to create beautiful works of art that people will pay heaps for. You could even feed in data on the influences of different artists, so it could likewise create art that's based on certain famous paintings, while also being altogether different. But how do you teach it to "like" certain things? How do you teach it to enjoy certain artwork, and use that to create it's own?
If you know that "pleasure"/[insert label here] is involved, then you can make it one of the things your AI optimises for. You could use a GAN where the adversarial (discriminator) network outputs not just a guess on real/fake, but also a guess on "pleasure". Supply that network with some information about pleasure and you might have something.
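A minimal sketch of that two-headed discriminator idea, assuming PyTorch; the layer sizes, names, and the human preference signal are all hypothetical placeholders:

```python
# A GAN discriminator with two heads: the usual real/fake guess,
# plus a guess at how "pleasing" the image is.
import torch
import torch.nn as nn

class TwoHeadDiscriminator(nn.Module):
    def __init__(self, img_dim=28 * 28, hidden=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(img_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden),
            nn.LeakyReLU(0.2),
        )
        # Head 1: the standard real/fake guess.
        self.real_fake = nn.Linear(hidden, 1)
        # Head 2: a "pleasure" score, trained against whatever human
        # preference data you can gather (ratings, prices paid, clicks...).
        self.pleasure = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.features(x.flatten(1))
        return torch.sigmoid(self.real_fake(h)), torch.sigmoid(self.pleasure(h))

# The generator's loss could then mix both signals, e.g.
#   loss_G = bce(real_fake_pred, ones) + lambda_p * (1 - pleasure_pred).mean()
# so it is pushed to produce images that look real *and* score high on "pleasure".
```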
I have some ideas on how to identify aesthetic experiences based on introspection, but we'll see what some AI experts I know have to say about that before I talk too much more about it.
No. It's an embryonic thought experiment for now. I'm going to run it past some AI researchers before I try to implement it.