Hello, everyone! I opened the operational amplifier notebook (
https://github.com/idea-fasoc/OpenFASOC/blob/7dc5eb42cec94c02b74e72483df6fdc2b2603fb9/docs/source/notebooks/glayout/glayout_opamp.ipynb) and saw the section on reinforcement learning, a topic I want to learn more about. On my first attempt to run it, I hit an error and noticed that the scripts were using outdated paths. I made the following replacements in model.py, run_training.py, and eval.py:
```python
#sys.path.append('../generators/gdsfactory-gen')
sys.path.append('../generators/glayout/glayout')
#sys.path.append('../generators/gdsfactory-gen/tapeout_and_RL')
sys.path.append('../generators/glayout/tapeout/tapeout_and_RL')
```
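In case it helps anyone hitting the same error, here is a small sanity-check sketch I could have used to confirm the new paths resolve before appending them (the relative paths assume the scripts run from their original directory; adjust to your working directory):

```python
import os
import sys

# Updated locations after the repo reorganization; relative to the
# directory the scripts are launched from (an assumption on my part).
new_paths = [
    '../generators/glayout/glayout',
    '../generators/glayout/tapeout/tapeout_and_RL',
]

for p in new_paths:
    # Warn early instead of failing later with an ImportError.
    if not os.path.isdir(p):
        print(f'warning: {p} not found; check your working directory')
    sys.path.append(p)
```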
I restarted the training, and it has now been running for over 2h30m. For this run I generated the sample list with 50 samples (instead of the original 100). I also noticed the repeated message shown in the attached image, with no further results or visible progress in the execution. I suspect something is wrong. Is this normal?
Another question: the notebook mentions two types of specifications, optimized and capped. However, according to the text, the calculated reward uses both. Shouldn't it be one or the other? And shouldn't there be an option to choose which type to use?
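To make my question concrete, here is a minimal sketch of how I imagine a reward could use both spec types at once (the function and spec names are my own hypothetical illustration, not the notebook's actual code): optimized specs keep earning margin past their target, while capped specs only penalize when the target is not met.

```python
def reward(measured: dict, optimized: dict, capped: dict) -> float:
    """Hypothetical combined reward over both spec types.

    measured  -- simulated values, keyed by spec name
    optimized -- specs to push as far past the target as possible
    capped    -- specs that must only be met; no bonus beyond the target
    """
    def rel(m: float, t: float) -> float:
        # Normalized margin: positive when the spec is met.
        return (m - t) / (abs(m) + abs(t))

    r = sum(rel(measured[k], t) for k, t in optimized.items())
    # Capped specs contribute only a penalty (clipped at zero when met).
    r += sum(min(rel(measured[k], t), 0.0) for k, t in capped.items())
    return r
```

If this is roughly what the notebook does, then using both types together makes sense to me, but I'd still like to know whether the split between the two can be configured.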