I could only make it through about halfway because that was a long-arse article, but it just seemed like a more detailed version of the "surface level" explanations I have seen elsewhere.
The most common misconception I have seen around neural networks is that they are somehow "learning" (man, that's a seriously misused term in ML/DS) in the same way that a human learns something, but that isn't really the case.
All a neural network is doing, regardless of architecture, is matching patterns of input to desired patterns of output, and those output types are ones we control.
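To make that concrete, here's a minimal sketch (plain NumPy, toy XOR data I made up for illustration) of what "fitting inputs to desired outputs" actually means: pick parameters, nudge them until the outputs match the targets. Nothing resembling human learning happens anywhere in the loop.

```python
import numpy as np

# A tiny 2-layer network fitted to XOR by gradient descent.
# "Learning" here is literally just curve fitting: adjust weights
# until f(X) matches the desired output pattern y.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # the desired output pattern

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass: compute the current output pattern
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: binary cross-entropy gradient, pushed through both layers
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel()))  # rounded predictions should match 0, 1, 1, 0
```

Scale that same loop up by many orders of magnitude in parameters and data and you have, structurally, what the large language models are doing.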
All that's really happening with these large language models is that they operate on large amounts of reasonably high-quality (for the task at hand) data, with significant amounts of computing resources. That means even subtle patterns in the English language (such as when to use "me" versus "I" in a sentence) can eventually, and some would say inevitably, be teased out.
You give being able to pass the SAT as an example of how advanced AI is becoming, but it isn't really, because the SAT is a standardised test. In fact, many humans sit tests like these and pass in a way not too dissimilar to how an AI system would (at least superficially): they recognise that all the questions can be separated into different types, and that within each type only the specific values in the question change.
You are right that there is a code reproducibility problem (places like """open"""AI aren't helping), but it's not even necessarily due to licensing per se. Rather, the models (not to mention the datasets; I absolutely believe that NYT article when it said the training data for GPT-3 was ~700GB) are now so large that basically only supercomputers can make any meaningful attempt to train them, which is why transfer learning and network pruning are rising in popularity. In fact, I recall that after BERT was trained, to prove a point, some researchers "crunched the numbers" to figure out how much CO2 the training put into the air, although I can't seem to find that response anymore, so it may have been deleted.
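For anyone unfamiliar with the pruning trick mentioned above, here is a minimal sketch of one common variant, magnitude pruning (the weight matrix and 50% sparsity here are made-up illustration values, not anything from a real model): you zero out the smallest-magnitude weights of an already-trained network, keeping a mask so they stay zero, which shrinks the model without retraining it from scratch.

```python
import numpy as np

# Stand-in for one trained weight matrix of a much larger network.
rng = np.random.default_rng(42)
W = rng.normal(size=(4, 4))

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |value|.

    Returns the pruned weights and a boolean mask marking survivors;
    the mask is reapplied after any fine-tuning step to keep the
    pruned entries at zero.
    """
    k = int(weights.size * sparsity)
    # k-th smallest absolute value is the cutoff
    threshold = np.sort(np.abs(weights).ravel())[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W_pruned, mask = magnitude_prune(W, 0.5)
print(f"nonzero before: {np.count_nonzero(W)}, after: {np.count_nonzero(W_pruned)}")
```

Transfer learning attacks the same cost problem from the other direction: instead of shrinking a big model, you reuse one somebody else already paid to train.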
If you really want to see what people involved with Free Software actually think about things like GitHub Copilot, the Free Software Foundation recently did a call for white papers on the topic, all of which are rather short and easy reads. I would recommend you check them out.