Just a random idea I had in the shower early this morning. Maybe someone has already done this. Maybe not.
Lots of us want machines to learn by themselves. This has been studied for years, but there hasn't been a breakthrough yet. Could there be a fundamental problem that prevents us from doing so?
When deep learning came out, people went crazy about it. This is possibly the future of AI, people thought. However, if you look closely, the underlying structure (the number of layers, their types, etc.) all relies on our decisions, a human's decisions. That is not AI.
What if we gave the machine the flexibility to also change those structures? We provide the building blocks, and it learns on its own where to use what.
This made me think of Google's AutoML. What it basically does is automatically try many combinations of models using Google's powerful backend.
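To make the "try many combinations" idea concrete, here is a minimal sketch of that kind of model search, not AutoML's actual method: randomly pick candidate architectures from a hand-written search space, train each one on a toy problem (XOR), and keep the best. The search space, the tiny numpy MLP, and the training loop are all illustrative assumptions, stand-ins for the full models and datasets a real system would use.

```python
import numpy as np

# Toy stand-in for AutoML-style model search: try several candidate
# architectures (hidden-layer sizes) and keep whichever scores best.
# Each candidate is a tiny numpy MLP trained on XOR; a real system
# trains full models on real data with a huge backend.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def evaluate(hidden_sizes, steps=2000, lr=0.5):
    """Train a small tanh MLP on XOR and return its accuracy."""
    sizes = [2, *hidden_sizes, 1]
    Ws = [rng.normal(0, 1, (a, b)) for a, b in zip(sizes, sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]

    def forward():
        acts = [X]
        for W, b in zip(Ws[:-1], bs[:-1]):
            acts.append(np.tanh(acts[-1] @ W + b))
        out = 1 / (1 + np.exp(-(acts[-1] @ Ws[-1] + bs[-1])))
        return acts, out

    for _ in range(steps):
        acts, out = forward()
        delta = (out - y) * out * (1 - out)  # sigmoid + squared error
        for i in range(len(Ws) - 1, -1, -1):
            gW = acts[i].T @ delta / len(X)
            gb = delta.mean(axis=0)
            if i > 0:  # propagate error through the tanh layer below
                delta = (delta @ Ws[i].T) * (1 - acts[i] ** 2)
            Ws[i] -= lr * gW
            bs[i] -= lr * gb

    _, out = forward()
    return float(((out > 0.5) == (y > 0.5)).mean())

# The hand-picked search space: the building blocks we still provide.
search_space = [(2,), (4,), (8,), (4, 4), (8, 4)]
scores = {hs: evaluate(hs) for hs in search_space}
best = max(scores, key=scores.get)
print("best architecture:", best, "accuracy:", scores[best])
```

Note that even here a human chose the search space, which is exactly the limitation the rest of this post is about.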
This is dumb but cool: training neural nets to train neural nets. However, the resulting neural network would still be a fixed neural network, meaning it does not evolve.
The dumbest solution is to do something similar to what AutoML does, but with a reinforcement-learning-like closed-loop structure. You want the neural network (the one trained to design other neural networks) to be able to refresh its own memory.
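Here is a minimal sketch of what such a closed loop could look like, in the spirit of RL-based architecture search. A "designer" keeps a softmax distribution over a few candidate architectures, samples one, observes a reward, and updates its own parameters toward higher-reward choices (REINFORCE). The candidate names and their rewards are made-up stand-ins for the validation accuracy you would get by actually training each designed network.

```python
import numpy as np

# Closed-loop sketch: a controller proposes an architecture, gets a
# reward, and refreshes its own parameters from that feedback.
# Rewards are hypothetical accuracies, not real training results.

rng = np.random.default_rng(0)

candidates = ["2-layer CNN", "3-layer CNN", "small RNN", "tiny MLP"]
reward_of = np.array([0.72, 0.91, 0.64, 0.55])  # hypothetical accuracies

logits = np.zeros(len(candidates))  # the controller's parameters
baseline, lr = 0.0, 0.1

for step in range(3000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(len(candidates), p=probs)   # propose a design
    r = reward_of[a]                           # observe its quality
    baseline += 0.05 * (r - baseline)          # running reward baseline
    onehot = np.eye(len(candidates))[a]
    # REINFORCE: push probability toward better-than-baseline designs
    logits += lr * (r - baseline) * (onehot - probs)

probs = np.exp(logits) / np.exp(logits).sum()
print("controller now prefers:", candidates[int(np.argmax(probs))])
```

The point of the loop is that the designer's knowledge is not frozen: every reward it sees changes how it will design the next network.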
This trained network would be a building block for one particular task: designing another neural network for a specific job, like object detection or language translation.
This structure is clearly layered. While it makes sense, most of our thinking is not really layered; it involves cooperation among many different areas of the brain. So if we somehow connected these trained neural networks together into a larger mixture rather than a stack of layers, maybe the machine would have much more flexibility, enough to "evolve" and think on its own.