I’m noticing I don’t have much sympathy for this view. I have plenty of empathy: it sounds very frustrating to have a tool taken away from you when you expected it would be around forever and made plans on that assumption!
To the extent that someone needs Codex in order to reproduce results, and to the extent that their results fail to generalize beyond Codex, it sounds a lot like they were studying Codex, not LLMs in general. Now that Codex no longer exists, there is no longer a need to study Codex, so it’s not a great loss that this work can no longer be reproduced.
Or am I missing something?
What you are missing is that, contrary to what OpenAI says, Codex can’t be replaced by GPT-3.5 or GPT-4 for some research. Specifically, all GPT-3.5 and GPT-4 models go through InstructGPT-style post-training. To study the effect of post-training, you need a control without it. By retiring Codex, OpenAI stopped providing any access to its models without post-training.
They already reversed this decision.