I would add that judges shouldn't be expected to verify that the source code of the TAS tools being used is legit and produces a legit run.
Somebody would need to do some kind of verification or at least validation that the source code is legit, and is doing what it's supposed to do (and, preferably, that it's as bug-free as possible).
(In the same vein as in my previous post, one could ask whether this has been done with the currently accepted emulators. I don't know. Maybe not. But as stated, the situation is a bit different in that those emulators have been used for years, with hundreds of games, which gives a relatively high degree of confidence that they work correctly. Also, the very fact that they are generic emulators, rather than being specific to one single game, makes it significantly less likely that they could be used to produce illegitimate output for a particular game.)
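To make the "validation" idea above a bit more concrete, here is a minimal sketch (in Python, purely illustrative) of the most basic integrity check a verifier might run: confirming that a downloaded tool archive matches a checksum published by its author. The archive name and digest here are hypothetical placeholders, and a real review of the tool's behavior would of course go far beyond this.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical example: the archive name and expected digest are placeholders,
# not real values for any actual TAS tool release.
archive = Path("example-tas-tool-1.0.zip")
published_digest = "0" * 64  # would come from the tool author's release notes

if sha256_of(archive) == published_digest:
    print("Archive matches the published checksum.")
else:
    print("Checksum mismatch: do not trust this build.")
```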
My problem with the proposal isn't just the workload it would put on judges; it's that the proposal doesn't make sense to me from an organizational standpoint.
Let me summarize the process as it currently stands:
1. People make rerecording frameworks.
2. Higher-up staff members evaluate the frameworks and decide whether or not to accept them and implement them on the site.
3. If a framework is accepted, it is implemented in the parser.
4. People submit TASes made on the framework.
5. Judges evaluate the TAS. If it is good, it is accepted. If it is not, it is rejected.
6. Encoders make encodes for accepted TASes.
7. The TAS is published.
Your proposal:
1. People make rerecording frameworks.
2. People submit TASes made on the frameworks.
3. Judging? Something about a referee, which is really vague. Are you suggesting that evaluating the framework would happen at the same time as judging the run? What happens if the rerecording framework isn't acceptable? What happens if the submission doesn't include enough information on how to run the TAS? Would we require the submitter to supply this information in addition to the information on the game itself?
I have so many questions that just aren't answered. It makes much more sense to me to have the evaluation of the rerecording framework happen prior to submission; that lets us prepare for each submission much better. If you would like more structure in how rerecording frameworks are evaluated, I could support that. What I can't support is a massive change to the whole process without any details on how it would actually be implemented.
I think this is an interesting thread. My fear is that a lot of the work already done will get lost and there will be a gap in institutional knowledge. At the same time, I can see that TASVideos is poorly equipped to deal with such a fluid environment.
Maybe a separate website would be better suited to focusing all this work and getting it out for general distribution? Like TASLabs instead of TASVideos. At least you could forgo all the formalities and document and categorize things as you please.
Oh well, just a thought I had while reading through the thread.